In computer graphics, why is everything an approximation?

While going through some computer graphics exam questions, I found this one and I was wondering what everyone thinks the answer is:
Given the statement: “In Computer Graphics, everything is an approximation”,
explain what you understand by this, in terms of modelling and rendering.

We can't have 100% quality in our models or lighting because the computational power and memory required to store and render that data would be huge.
Just imagine simulating the lighting photon by photon in a scene with literally billions of particles, tracking how each photon collides with and is absorbed by atoms and molecules - in real time. It would be practically impossible because of the computation required.

In addition to what #Tokfrans said, think about mesh structures: they are all discrete versions of a surface, in which the surface is approximated by a set of triangles (polygons). Curves are the same story, as we use a polyline to represent them. Even if the curve is infinitely smooth, we usually render and represent it with a set of connected line segments.
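As a toy illustration of that discretization (a sketch of my own in Java; the segment count n is arbitrary), here is a perfectly smooth circle reduced to the polyline the renderer actually sees:

    // A unit circle approximated by n connected line segments.
    // The true curve is infinitely smooth; what we store and render
    // is only this discrete list of vertices.
    class CurveApprox {
        static double[][] circleAsPolyline(int n) {
            double[][] pts = new double[n + 1][2];
            for (int i = 0; i <= n; i++) {
                double t = 2.0 * Math.PI * i / n;
                pts[i][0] = Math.cos(t); // x
                pts[i][1] = Math.sin(t); // y
            }
            return pts; // consecutive points form the segments
        }
    }

Raising n shrinks the error but never eliminates it, which is exactly the point of the exam statement.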

Related

Are there existing tools that raytrace triangle meshes?

Disclaimer: I'm not 100% on whether this is a well-formed question, so please feel free to comment and suggest improvements. I'll be actively looking out for ways to improve this question.
I have a triangle mesh, let's say the Stanford Bunny. Now, I want to cast a ray from a source point in 3D along a 3D direction vector, and identify just the first intersection of that ray with the triangle mesh.
I already have a naive implementation cooked up. However, I'm looking for a more advanced implementation. In particular, I'll be casting many millions of rays in many directions, so I'm looking for a multi-threaded or GPU-accelerated implementation.
I have to believe that there must be some pretty complete projects online, as raycasting triangle meshes is a fundamental part of 3D computer graphics. However, I can't find anything beyond personal projects, which leads me to believe that I am using the wrong search terms, or something pretty simple along those lines.
I am looking for suggestions on existing tools that can raytrace polygonal meshes.
If all you need to do is find the distance to the mesh for millions of rays, then it might be a good idea to look up a CUDA ray tracing tutorial online. This will show you how to cast many millions of rays. In most tutorials, ray tracing is used to render to the screen with the camera matrix; however, this is not necessary. Simply set each ray's starting parameters to what you need them to be, such as the 3D direction vector and origin position, then output the data back to the CPU. Be wary of the bandwidth between the GPU and CPU: sending millions of intersection points back and forth can make the program run exceptionally slow.
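For reference, the per-triangle test at the core of most such implementations, CPU or CUDA, is the Möller-Trumbore algorithm. Here is a minimal single-threaded Java sketch (the epsilon and array layout are my own choices); the first hit is simply the smallest returned t over all triangles:

    // Möller-Trumbore ray/triangle intersection.
    // Returns the ray parameter t of the hit point, or
    // POSITIVE_INFINITY on a miss.
    class RayTriangle {
        static final double EPS = 1e-9;

        static double intersect(double[] o, double[] d,
                                double[] v0, double[] v1, double[] v2) {
            double[] e1 = sub(v1, v0), e2 = sub(v2, v0);
            double[] p = cross(d, e2);
            double det = dot(e1, p);
            if (Math.abs(det) < EPS) return Double.POSITIVE_INFINITY; // parallel
            double inv = 1.0 / det;
            double[] t = sub(o, v0);
            double u = dot(t, p) * inv;                 // barycentric u
            if (u < 0 || u > 1) return Double.POSITIVE_INFINITY;
            double[] q = cross(t, e1);
            double v = dot(d, q) * inv;                 // barycentric v
            if (v < 0 || u + v > 1) return Double.POSITIVE_INFINITY;
            double dist = dot(e2, q) * inv;             // distance along ray
            return dist > EPS ? dist : Double.POSITIVE_INFINITY;
        }

        static double[] sub(double[] a, double[] b) {
            return new double[]{a[0]-b[0], a[1]-b[1], a[2]-b[2]};
        }
        static double[] cross(double[] a, double[] b) {
            return new double[]{a[1]*b[2]-a[2]*b[1],
                                a[2]*b[0]-a[0]*b[2],
                                a[0]*b[1]-a[1]*b[0]};
        }
        static double dot(double[] a, double[] b) {
            return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
        }
    }

Each ray is independent, so spreading rays across CPU threads or GPU blocks parallelizes naturally; what mature ray tracers add on top of this test is an acceleration structure (typically a BVH) so that each ray only tests a small subset of the triangles.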

Performance considerations of ECEF vs. Polar coordinates in a modern Earth scale simulation

I am sketching out a new simulation that will involve thousands of ships moving around on Earth's oceans and interacting over long periods of time. So, lots of "intersection detection" for sensor and communications ranges, as well as region detection for various environmental conditions. We'll assume a spherical earth, not WGS84. This is an event-step simulation that spits out metrics, not a real time game or anything like that.
The question is whether to use Cartesian coordinates (Earth-Centered, Earth-Fixed) or geodetic/polar coordinates. With polar coordinates, a ship's track would be internally represented as a series of lat/lon waypoints with times, and great-circle paths between them. With a Cartesian representation, the waypoints would be connected with polyline approximations of the great circle between them.
The reason this is a question is I suspect that by sticking to a Cartesian data model it becomes possible to use various geometry libraries that are performance tuned, and even offer up SIMD/GPU performance advantages. The polar coordinates would probably be the more natural way to proceed if writing everything from scratch. But I suspect that by keeping things Cartesian I will have greater access to better and faster libraries. Is this an invalid line of thought? Another consideration is that I know polar coordinate calculations tend to get really screwy when near the poles.
Just curious if somebody with experience could save me a whole lot of time prototyping some scenarios both ways.
It often works well to represent directions as unit vectors instead of angles. Rotating a vector then becomes a 2x2 or 3x3 matmul (efficient with SIMD, though still more expensive than adding two angles stored in radians), but you very rarely need sin/cos.
You may occasionally want atan2 to get an angle, but usually not inside tight loops.
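For example (a sketch of my own, assuming the rotation itself is also stored as the unit vector (cos a, sin a)):

    // Rotate a 2D direction (x, y) by a rotation stored as a unit
    // vector (c, s) = (cos a, sin a): one 2x2 matmul, no sin/cos.
    class Rotate2D {
        static double[] rotate(double x, double y, double c, double s) {
            return new double[]{c * x - s * y,
                                s * x + c * y};
        }
    }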
Intersection detection can be very efficient (with SIMD) for XYZ coordinates, given another XYZ plus a range. I'm not sure how efficiently you could check which lat/lon pairs were within range of a given point; that's not a problem I've looked at.
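A sketch of that XYZ range test (my own Java; the names and spherical-Earth radius are assumptions): with positions stored as unit vectors, "within surface range R of a point" reduces in the inner loop to a dot product against one precomputed cosine, which vectorizes well.

    // Positions stored as unit vectors on a sphere. Two points are
    // within surface distance R exactly when the angle between them
    // is at most R / EARTH_R, i.e. when dot(a, b) >= cos(R / EARTH_R).
    class SphereRange {
        static final double EARTH_R = 6_371_000.0; // meters, spherical model

        static double[] fromLatLon(double latRad, double lonRad) {
            double c = Math.cos(latRad);
            return new double[]{c * Math.cos(lonRad),
                                c * Math.sin(lonRad),
                                Math.sin(latRad)};
        }

        static boolean withinRange(double[] a, double[] b, double cosThreshold) {
            // e.g. cosThreshold = Math.cos(rangeMeters / EARTH_R)
            return a[0]*b[0] + a[1]*b[1] + a[2]*b[2] >= cosThreshold;
        }
    }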
IDK what kind of stuff you'd find in existing libraries, or what you'd want to do with it.

How to structure Point Light Sources?

I am using Java to write a very primitive 3D graphics engine based on The Black Art of 3D Game Programming from 1995. I have gotten to the point where I can draw single color polygons to the screen and move the camera around the "scene". I even have a Z buffer that handles translucent objects properly by sorting those pixels by Z, as long as I don't show too many translucent pixels at once. I am at the point where I want to add lighting. I want to keep it simple, and ambient light seems simple enough, directional light should be fairly simple too. But I really want point lighting with the ability to move the light source around and cast very primitive shadows ( mostly I don't want light shining through walls ).
My problem is that I don't know the best way to approach this. I imagine a point light source casting rays at regular angles, and if these rays intersect a polygon it will light that polygon and stop moving forward. However, when I think about a scene with multiple light sources and multiple polygons with all those rays, I imagine it will get very slow. I also don't know how to handle a case where a polygon is far enough away from a light source that it falls in between two rays. I would give each light source a maximum distance, and if I gave it enough rays, then there should be no point within that distance where two adjacent rays are far enough apart to miss a polygon, but that only makes my problem with the number of calculations worse.
My question to you is: Is there some trick to point light sources to speed them up or just to organize it better? I'm afraid I'll just get a nightmare of nested for loops. I can't use openGL or Direct3D or any other cheats because I want to write my own.
If you want to see my results so far, here is a youtube video. I have already fixed the bad camera rotation. http://www.youtube.com/watch?v=_XYj113Le58&feature=plcp
Lighting for real-time 3D applications is (or rather, has generally been in the past) done with very simple approximations - see http://en.wikipedia.org/wiki/Shading. Shadows are expensive, and in rasterizing 3D engines they have generally been accomplished via shadow maps and shadow volumes. Point lights make shadows even more expensive.
Dynamic real-time light sources have only recently become a common feature in games, simply because they place such a heavy burden on the rendering system, and those games leverage dedicated graphics cards. So I think you may struggle to get good performance out of your engine if you decide to include dynamic, shadow-casting point lights.
Today it is commonplace for lighting to be applied in one of two ways:
Traditionally this has been "forward rendering". In this method, for every vertex (if you are doing the lighting per vertex) or fragment (if you are doing it per pixel) you calculate the contribution of each light source.
More recently, "deferred" lighting has become popular, wherein the geometry and extra data like normals and colour info are all rendered to intermediate buffers, which are then used to calculate the lighting contributions. This way, the lighting calculations do not depend on the geometry count. It does, however, have a lot of other overhead.
There are a lot of options. Implementing anything much more complex than the basic models that dedicated graphics cards have used over the past couple of years is going to be challenging, however!
My suggestion would be to start out with something simple - basic lighting without shadows. From there you can extend and optimize.
What are you doing the ray-triangle intersection test for? Are you trying to light only triangles which the light would reach? Ray-triangle intersections for every light with every poly are going to be very expensive, I think. For lighting without shadows, you would typically just iterate through every face (or, if you are doing it per vertex, through every vertex) and calculate and add the lighting contribution per light; you would do this just before you start rasterizing, as you have to pass through all polys in any case.
You can calculate the lighting using any illumination model, even something very simple like Lambertian reflectance, which shades the surface based upon the dot product of the normal of the surface and the direction vector from the surface to the light. Make sure your vectors are in the same space! This is possibly why you are getting the strange results that you are: if your surface normal is in world space, be sure to calculate the light vector in world space too. There are a bunch of advantages to calculating lighting in particular spaces; you can look at that later on, but for now I suggest you just get the basics up and running. Also have a look at Blinn-Phong; this is the shading model graphics cards used for many years.
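To make that concrete, here is a minimal per-face (or per-vertex) sketch in Java (identifiers are my own; everything is assumed to be in world space) that sums the Lambertian contribution of each point light:

    // Lambertian (diffuse) shading: sum over point lights of
    // albedo * lightColor * max(0, dot(n, l)), where l is the unit
    // vector from the surface point toward the light.
    // All vectors must be in the same space (here: world space).
    class Lambert {
        static double[] shade(double[] p, double[] n,
                              double[][] lightPos, double[][] lightColor,
                              double[] albedo) {
            double[] out = new double[3];
            for (int i = 0; i < lightPos.length; i++) {
                double[] l = normalize(new double[]{
                        lightPos[i][0] - p[0],
                        lightPos[i][1] - p[1],
                        lightPos[i][2] - p[2]});
                // clamp to zero so lights behind the surface contribute nothing
                double nDotL = Math.max(0.0, n[0]*l[0] + n[1]*l[1] + n[2]*l[2]);
                for (int c = 0; c < 3; c++)
                    out[c] += albedo[c] * lightColor[i][c] * nDotL;
            }
            return out;
        }

        static double[] normalize(double[] v) {
            double len = Math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
            return new double[]{v[0]/len, v[1]/len, v[2]/len};
        }
    }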
For lighting with shadows - look into the links I posted. They were developed because realistic lighting is so expensive to calculate.
By the way, LaMothe had a follow-up book called Tricks of the 3D Game Programming Gurus: Advanced 3D Graphics and Rasterization. This takes you through every step of programming a 3D engine. I am not sure what the Black Art book covers.

Are there any rendering alternatives to rasterisation or ray tracing?

Rasterisation (triangles) and ray tracing are the only methods I've ever come across to render a 3D scene. Are there any others? Also, I'd love to know of any other really "out there" ways of doing 3D, such as not using polygons.
Aagh! These answers are very uninformed!
Of course, it doesn't help that the question is imprecise.
OK, "rendering" is a really wide topic. One issue within rendering is camera visibility or "hidden surface algorithms" -- figuring out what objects are seen in each pixel. There are various categorizations of visibility algorithms. That's probably what the poster was asking about (given that they thought of it as a dichotomy between "rasterization" and "ray tracing").
A classic (though now somewhat dated) categorization reference is Sutherland et al., "A Characterization of Ten Hidden-Surface Algorithms", ACM Computing Surveys, 1974. It's very outdated, but it's still excellent for providing a framework for thinking about how to categorize such algorithms.
One class of hidden surface algorithms involves "ray casting", which is computing the intersection of the line from the camera through each pixel with objects (which can have various representations, including triangles, algebraic surfaces, NURBS, etc.).
Other classes of hidden surface algorithms include "z-buffer", "scanline techniques", "list priority algorithms", and so on. They were pretty darned creative with algorithms back in the days when there weren't many compute cycles and not enough memory to store a z-buffer.
These days, both compute and memory are cheap, and so three techniques have pretty much won out: (1) dicing everything into triangles and using a z-buffer; (2) ray casting; (3) Reyes-like algorithms that use an extended z-buffer to handle transparency and the like. Modern graphics cards do #1; high-end software rendering usually does #2 or #3, or a combination. Various ray tracing hardware has been proposed, and sometimes built, but it never caught on; modern GPUs are now programmable enough to actually ray trace, though at a severe speed disadvantage compared to their hard-coded rasterization techniques. Other, more exotic algorithms have mostly fallen by the wayside over the years. (Although various sorting/splatting algorithms can be used for volume rendering or other special purposes.)
"Rasterizing" really just means "figuring out which pixels an object lies on." Convention dictates that it excludes ray tracing, but this is shaky. I suppose you could justify that rasterization answers "which pixels does this shape overlap" whereas ray tracing answers "which object is behind this pixel", if you see the difference.
Now then, hidden surface removal is not the only problem to be solved in the field of "rendering." Knowing what object is visible in each pixel is only a start; you also need to know what color it is, which means having some method of computing how light propagates around the scene. There are a whole bunch of techniques, usually broken down into dealing with shadows, reflections, and "global illumination" (that which bounces between objects, as opposed to coming directly from lights).
"Ray tracing" means applying the ray casting technique to also determine visibility for shadows, reflections, global illumination, etc. It's possible to use ray tracing for everything, or to use various rasterization methods for camera visibility and ray tracing for shadows, reflections, and GI. "Photon mapping" and "path tracing" are techniques for calculating certain kinds of light propagation (using ray tracing, so it's just wrong to say they are somehow fundamentally a different rendering technique). There are also global illumination techniques that don't use ray tracing, such as "radiosity" methods (which is a finite element approach to solving global light propagation, but in most parts of the field have fallen out of favor lately). But using radiosity or photon mapping for light propagation STILL requires you to make a final picture somehow, generally with one of the standard techniques (ray casting, z buffer/rasterization, etc.).
People who mention specific shape representations (NURBS, volumes, triangles) are also a little confused. That's a problem orthogonal to ray tracing vs. rasterization: for example, you can ray trace NURBS directly, or you can dice the NURBS into triangles and trace those. You can directly rasterize triangles into a z-buffer, but you can also directly rasterize high-order parametric surfaces in scanline order (cf. Lane/Carpenter et al., CACM 1980).
There's a technique called photon mapping that is actually quite similar to ray tracing, but provides various advantages in complex scenes. In fact, it's the only method (at least of which I know) that provides truly realistic (i.e. all the laws of optics are obeyed) rendering if done properly. It's a technique that's used sparingly as far as I know, since its performance is hugely worse than even ray tracing (given that it effectively does the opposite, simulating the paths taken by photons from the light sources to the camera) - yet this is its only disadvantage. It's certainly an interesting algorithm, though you're not going to see it in wide-scale use until well after ray tracing (if ever).
The Rendering article on Wikipedia covers various techniques.
Intro paragraph:
Many rendering algorithms have been researched, and software used for rendering may employ a number of different techniques to obtain a final image.
Tracing every ray of light in a scene is impractical and would take an enormous amount of time. Even tracing a portion large enough to produce an image takes an inordinate amount of time if the sampling is not intelligently restricted.
Therefore, four loose families of more-efficient light transport modelling techniques have emerged: rasterisation, including scanline rendering, geometrically projects objects in the scene to an image plane, without advanced optical effects; ray casting considers the scene as observed from a specific point-of-view, calculating the observed image based only on geometry and very basic optical laws of reflection intensity, and perhaps using Monte Carlo techniques to reduce artifacts; radiosity uses finite element mathematics to simulate diffuse spreading of light from surfaces; and ray tracing is similar to ray casting, but employs more advanced optical simulation, and usually uses Monte Carlo techniques to obtain more realistic results at a speed that is often orders of magnitude slower.
Most advanced software combines two or more of the techniques to obtain good-enough results at reasonable cost.
Another distinction is between image order algorithms, which iterate over pixels of the image plane, and object order algorithms, which iterate over objects in the scene. Generally object order is more efficient, as there are usually fewer objects in a scene than pixels.
From those descriptions, only radiosity seems different in concept to me.

3D laser scanner capturing normals?

The lab I work at (at a university) is in the process of purchasing a laser scanner for scanning 3D objects. From the start, we've been trying to find a scanner that is able to capture real RAW normals from the actual scanned surface. It seems that most scanners only capture points, and then the software interpolates to find the normal of the approximate surface.
Does anybody know if there is actually such a thing as capturing raw normals? Is there a scanner that can do this and not interpolate the normals from the point data?
Highly unlikely. Laser scanning is done using ranges. What you want would require combining two entirely different techniques. Normals could be evaluated with higher precision using well-controlled lighting etc., but that requires a very different kind of setup. Also consider the sampling problem: what good is a normal with higher resolution than your position data?
If you already know the bidirectional reflectance distribution function of the material that composes your 3D object, it is possible that you could use a gonioreflectometer to compare the measured BRDF at a point. You could then individually optimize a computed normal at that point by comparing a hypothetical BRDF against the actual measured value.
Admittedly, this would be a reasonably computationally-intensive task. However, if you are only going through this process fairly rarely, it might be feasible.
For further information, I would recommend that you speak with either Greg Ward (Larson) of Radiance fame or Peter Shirley at NVIDIA.
Here is an example article on using structured light to reconstruct normals from gradients:
Shape from 2D Edge Gradients
I didn't find the exact article I was looking for, but this seems to be based on the same principle.
You can reconstruct normals from the angle and width of the stripe after it is deformed by the object.
You could do it with a structured light + camera setup.
The normal would come from the angle between the projected line and the position in the image. As the other posters point out, you can't do it with a point laser scanner.
Capturing raw normals is almost always done using photometric stereo. This almost always requires placing some assumptions on the underlying reflectance, but even with somewhat inaccurate normals you can often do well when combining them with another source of data:
Really nice code for combining point clouds (from a laser scan for example) with surface normals: http://www.cs.princeton.edu/gfx/pubs/Nehab_2005_ECP/
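To make the photometric-stereo idea concrete, here is a minimal per-pixel sketch (my own Java; it assumes a Lambertian surface, three known distant light directions, and no shadows or highlights at the pixel). With intensities I_i = albedo * dot(L_i, n), solving the 3x3 system L g = I gives g = albedo * n, so the normal is g normalized:

    // Classic 3-light Lambertian photometric stereo for one pixel.
    // L: three unit light directions (rows); I: three measured intensities.
    class PhotometricStereo {
        static double[] normal(double[][] L, double[] I) {
            double[] g = solve3x3(L, I);             // g = albedo * n
            double len = Math.sqrt(g[0]*g[0] + g[1]*g[1] + g[2]*g[2]);
            return new double[]{g[0]/len, g[1]/len, g[2]/len}; // albedo = len
        }

        // Cramer's rule for a 3x3 system a x = b.
        static double[] solve3x3(double[][] a, double[] b) {
            double det = det3(a[0], a[1], a[2]);
            double[] x = new double[3];
            for (int col = 0; col < 3; col++) {
                double[][] m = {a[0].clone(), a[1].clone(), a[2].clone()};
                for (int row = 0; row < 3; row++) m[row][col] = b[row];
                x[col] = det3(m[0], m[1], m[2]) / det;
            }
            return x;
        }

        static double det3(double[] r0, double[] r1, double[] r2) {
            return r0[0]*(r1[1]*r2[2] - r1[2]*r2[1])
                 - r0[1]*(r1[0]*r2[2] - r1[2]*r2[0])
                 + r0[2]*(r1[0]*r2[1] - r1[1]*r2[0]);
        }
    }

With more than three lights, the same idea becomes a least-squares solve, which also helps average out the measurement noise.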
