Calculating surface fragments (continents) on an expanding sphere (Earth) - geometry

Let's suppose plate tectonics is wrong and our planet Earth has expanded over time, the continents being the remnants of a crust that once covered the entire surface of a smaller planet, the ocean floors being extruded mantle, a manifestation of expansion.
http://earthexpansion.blogspot.com/2011/02/earth-expansion-synoptic-simplicity.html
http://earthexpansion.blogspot.com/2011/02/synoptic-simplicity-again.html
It is difficult to visualize how everything would have looked on a smaller Earth. Well, some people have made visualizations, but what about calculations?
Unfortunately, my experience is exclusively in Boring Business Brogramming. So when thinking about whether a development such as the Americas swiveling open to accommodate the Pacific is possible, I don't even know how to approach this problem.
Given today's Earth and geometrical data - a sphere (kind of), surface fragments (continents, polygons?) - would it be possible to retro-calculate the Earth to a smaller size, considering certain constraints (continents cannot be arbitrarily deformed) in order to find out whether a given movement story for expansion makes sense geometrically?
How would you go about doing it? Concepts, approaches? Tools, techniques? Sources for geo data?
Update
An important factor to take into account in reverse-engineering the Earth to a former continental configuration on a smaller planet is the age of the ocean floors, as measured by the U.S. Navy and others. As you can see on the map, the ocean floor is very young everywhere compared to the continental crust. Moving back in time and deflating the planet, the youngest stretches of ocean floor (red, at the spreading ridges) have to be removed first, because we know they weren't there yet.
So these measurements significantly constrain the problem of surface reconfiguration on a shrinking sphere. Still, I'm clueless as to the geometry to use in such a problem.

From how I understand your problem, the surface areas of all of today's continents would stay the same at any given time, while the total surface area of the Earth would shrink as you move back in time. Right?
So the question is whether the continents can be placed on the Earth so that their total surface area does not exceed the total surface area of Earth for that moment in time. And there are some constraints to this problem: you cannot make the continents jump from one hemisphere to the other in a short period of time, for example. So you'd have to take the incremental movement of continents into account.
Tricky one. The closest related problem is probably "Algorithm for fitting 2D polygons in an area?" But the additional possibilities (like breaking a continent into several pieces, if that's allowed by that theory) make it a pretty hard one.
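Even before the polygon-fitting part, the pure area constraint gives a hard lower bound on how small the planet could have been. Here is a minimal Python sketch of that bound (the 29% continental-area fraction and the 6371 km mean radius are round assumed numbers for illustration, not measured data):

```python
import math

# Assumed round numbers for illustration only.
R_TODAY_KM = 6371.0          # mean Earth radius today
CONTINENT_FRACTION = 0.29    # continents cover roughly 29% of today's surface

def min_radius_for_continents(r_today=R_TODAY_KM, fraction=CONTINENT_FRACTION):
    """Smallest radius at which today's continental area would tile the
    whole sphere, assuming the continents keep their present area."""
    continental_area = fraction * 4.0 * math.pi * r_today ** 2
    # Solve 4*pi*r_min^2 = continental_area for r_min.
    return math.sqrt(continental_area / (4.0 * math.pi))

if __name__ == "__main__":
    r_min = min_radius_for_continents()
    print(f"Continents alone would cover a sphere of radius ~{r_min:.0f} km")
    # roughly 6371 * sqrt(0.29), i.e. about 3400 km
```

Any proposed movement story then has to fit between today's radius and roughly that lower bound before the incremental-movement and ocean-floor-age constraints even come into play.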

Related

Performance considerations of ECEF vs. Polar coordinates in a modern Earth scale simulation

I am sketching out a new simulation that will involve thousands of ships moving around on Earth's oceans and interacting over long periods of time. So, lots of "intersection detection" for sensor and communications ranges, as well as region detection for various environmental conditions. We'll assume a spherical earth, not WGS84. This is an event-step simulation that spits out metrics, not a real time game or anything like that.
The question is whether to use Cartesian coordinates (Earth-Centered, Earth-Fixed) or geodetic/polar coordinates. With polar coordinates a ship's track would be internally represented as a series of lat/lon waypoints with times, and great-circle paths between them. With a Cartesian representation the waypoints would be connected by polyline approximations of the great circle between them.
The reason this is a question is that I suspect that by sticking to a Cartesian data model it becomes possible to use various geometry libraries that are performance-tuned, and that even offer SIMD/GPU performance advantages. Polar coordinates would probably be the more natural way to proceed if writing everything from scratch, but I suspect that by keeping things Cartesian I will have greater access to better and faster libraries. Is this an invalid line of thought? Another consideration is that I know polar-coordinate calculations tend to get really screwy near the poles.
Just curious if somebody with experience could save me a whole lot of time prototyping some scenarios both ways.
It often works well to represent directions as unit vectors instead of angles. Rotation of a vector by another angle becomes a 2x2 or 3x3 matmul (efficient with SIMD, but still more expensive than an FP add of two numbers in radians), but you very rarely need sin/cos.
You may occasionally want atan2 to get an angle, but usually not inside tight loops.
Intersection-detection can be very efficient (with SIMD) for XYZ coordinates given another XYZ + range. I'm not sure how efficiently you could check which lat/lon pairs were within range of a given point, not a problem I've looked at.
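For what it's worth, a minimal numpy sketch of that kind of XYZ range query (the function names, the 100 km range, and the spherical 6371 km Earth are assumptions for illustration):

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0  # spherical Earth, as in the question

def to_unit_vectors(lat_deg, lon_deg):
    """Convert arrays of lat/lon (degrees) to unit vectors in ECEF-like XYZ."""
    lat = np.radians(lat_deg)
    lon = np.radians(lon_deg)
    return np.column_stack((np.cos(lat) * np.cos(lon),
                            np.cos(lat) * np.sin(lon),
                            np.sin(lat)))

def within_range(ships_xyz, sensor_xyz, range_km):
    """Boolean mask of ships within great-circle range of one sensor.
    A dot product against a precomputed cosine threshold replaces any
    per-pair trig, so the inner loop is a single matrix-vector product."""
    cos_threshold = np.cos(range_km / EARTH_RADIUS_KM)
    return ships_xyz @ sensor_xyz >= cos_threshold

# Usage: 10,000 random ships, one sensor at (10N, 20E) with 100 km range.
rng = np.random.default_rng(0)
ships = to_unit_vectors(rng.uniform(-90, 90, 10_000), rng.uniform(-180, 180, 10_000))
sensor = to_unit_vectors(np.array([10.0]), np.array([20.0]))[0]
print(within_range(ships, sensor, 100.0).sum(), "ships in range")
```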
IDK what kind of stuff you'd find in existing libraries, or what you'd want to do with it.

What units should I use for lighting in a 3D rendering engine?

In reading academic papers on rendering, graphics processing, lighting, etc., I am finding a wide variety of units mentioned and used.
For example, Bruneton's Atmospheric Scattering paper seems to use candelas per square meter (cd/m^2), representing luminance. However, Jensen's Night Model uses watts per square meter (W/m^2), implying irradiance. Other papers mention irradiance, luminance, illuminance, etc., with seemingly no common representation of the lighting calculations used. How, then, can one even be sure that in implementing all of these papers the calculations will "play well" together?
To add to the confusion, most papers on adaptive tonemapping seem to forgo units altogether, merely recommending that pixels (referred to as luminance) be rendered in a log format (decibels). A decibel is useless without a reference intensity/power.
This raises the question: what unit does a single pixel represent? When I calculate the "average luminance" of a scene by averaging the log-brightness of the pixels, what exactly am I calculating? The term "luminance" itself implies an area being illuminated and a solid angle for the source. This leads to two more questions: "What is the solid angle of the point source?" "What is the area of a pixel?"
My question is thus,
What units should lighting in a 3d graphics engine be represented in to allow for proper, calibrated brightness control across a wide variety of light sources, from faint starlight to bright sunlight, and how does this unit relate to the brightness of individual pixels?
Briefly: luminance, measured in candela per square meter (cd/m^2), is the appropriate unit.
Less briefly: computer graphics is usually concerned with what things should look like to people. The units that describe this are:
"luminous flux" is measured in lumens, or lm, which are defined proportional to the total radiated power (in watts) of light at a particular wavelength.
"luminous intensity" is measured in candela, or cd, which can be defined as lumens per steradian (lm/sr).
Intuitively, when you spread the same amount of energy over a larger area, it becomes proportionately less bright. This yields two more quantities:
"irradiance" is the luminous flux per unit area. It is measured in lm/m^2, and is proportional to W/m^2.
"radiance" is the luminous intensity per unit area. It is measured in cd/m^2, or lm/(sr.m^2).
Now, to answer your questions:
Each pixel represents a finite amount of solid angle from the camera, measured in steradians. In the context of your question, the relevant area is the area of the object being rendered.
The luminance (measured in cd/m^2) represents surface brightness, and has the unique property that it is invariant along any unobstructed path of observation (which makes it the most appropriate quantity for a rendering engine). The color of each pixel represents the average luminance over the solid angle occupied by that pixel.
Note that, by definition, a point source doesn't occupy any solid angle; its luminance is technically infinite. The illuminance it delivers is finite, though, and technically it should only contribute a finite (if potentially large) effect to a given pixel value. In any case, if you want to directly render point sources, you will need to treat them differently from area sources, and deal with the problem that quantities other than luminance are not invariant along a given ray...
When Jensen et al.'s paper "A Physically-Based Night Sky Model" uses the irradiance-related W/m^2 in a table of various sources of illumination, I would guess that their intention was to describe each source's relative contribution averaged over the entire night sky, abstracted from any visual details.
Finally, note that truly physically based models need to integrate over the observable spectrum in order to evaluate brightness and color. Thus, such models must start out with a distribution of watts over visible wavelengths, and use the standard human colorimetric model to evaluate lumens.
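To make the tonemapping part of the question concrete, here is a minimal Python sketch (all names are made up) of the usual "average log luminance" computation, treating each pixel as a luminance sample in cd/m^2 rather than a unitless value:

```python
import numpy as np

def log_average_luminance(luminance, eps=1e-4):
    """Geometric-mean ("log average") luminance of an HDR frame, the usual
    scene key used by adaptive tonemappers. `luminance` is an array of
    per-pixel values, here taken to be in cd/m^2; eps avoids log(0)."""
    return float(np.exp(np.mean(np.log(luminance + eps))))

def rec709_luminance(rgb):
    """Relative luminance of linear Rec.709/sRGB pixels (standard Rec.709
    weights). Multiply by a calibration factor if the frame is meant to be
    in absolute cd/m^2."""
    return rgb @ np.array([0.2126, 0.7152, 0.0722])

# Usage on a synthetic HDR frame spanning moonlight to sunlight brightness.
rng = np.random.default_rng(1)
frame = rng.uniform(0.01, 10_000.0, size=(720, 1280, 3))
print(log_average_luminance(rec709_luminance(frame)))
```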
The SI unit for brightness is the candela per square metre, so if you want to represent actual physical quantities it would be hard to argue against using that. As for how this unit relates to the brightness of an individual pixel: that would be a function of the brightness of the part of the illumination source covered by the pixel's viewing area, combined with contributions from elsewhere in the scene as calculated by the engine. Presumably this would vary considerably depending on the renderer.

How heavy is hardware tessellation?

If tessellation gives a bonus over just using high-poly models, then why do modern 2012 games still use gigantic models that take a lot of hard disk space, instead of tessellating it all and just adjusting the tessellation factor to depend on distance from the camera, creating a nice level of detail?
You can't get back detail by tessellation that was not there in the first place. It just means those models would be even bigger without it being available.
In its most basic form, tessellation is a method of breaking down polygons into finer pieces. For example, if you take a square and cut it across its diagonal, you’ve “tessellated” this square into two triangles. By itself, tessellation does little to improve realism. For example, in a game, it doesn’t really matter if a square is rendered as two triangles or two thousand triangles—tessellation only improves realism if the new triangles are put to use in depicting new information.
When a displacement map (left) is applied to a flat surface, the resulting surface (right) expresses the height information encoded in the displacement map.
The simplest and most popular way of putting the new triangles to use is a technique called displacement mapping. A displacement map is a texture that stores height information. When applied to a surface, it allows vertices on the surface to be shifted up or down based on the height information. For example, the graphics artist can take a slab of marble and shift the vertices to form a carving. Another popular technique is to apply displacement maps over terrain to carve out craters, canyons, and peaks.
http://www.nvidia.com/object/tessellation.html
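For illustration, here is a toy CPU-side Python sketch of the two ideas in play, displacing newly generated vertices with a heightmap and choosing a distance-dependent tessellation factor; the constants and function names are invented, and real engines do this on the GPU in the hull/domain shader stages:

```python
import numpy as np

def displace_grid(heightmap, scale=1.0):
    """Turn a 2D heightmap (the displacement map) into a displaced vertex
    grid for a flat, upward-facing patch: x and z come from the grid
    indices, y is the height sample scaled by `scale`."""
    h, w = heightmap.shape
    xs, zs = np.meshgrid(np.arange(w, dtype=float), np.arange(h, dtype=float))
    ys = heightmap * scale
    return np.stack((xs, ys, zs), axis=-1)   # (h, w, 3) vertex positions

def tessellation_factor(distance, max_factor=64.0, falloff=100.0):
    """Simple distance-based LOD: full tessellation up close, dropping
    toward 1 as the patch recedes (constants are arbitrary)."""
    return max(1.0, max_factor * falloff / (falloff + distance))

# Usage: displace an 8x8 patch and pick a factor for a patch 250 units away.
patch = displace_grid(np.random.default_rng(2).random((8, 8)), scale=5.0)
print(patch.shape, tessellation_factor(250.0))
```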
I think the reason why nobody uses hardware tessellation in games is that roughly 60% of all players are console players, and as long as the consoles don't support Shader Model 5 there is no reason to make games that use hardware tessellation. Even if developers do, they may have to ship the game in both DX9 and DX11, because it is not really backward compatible... but maybe there is another reason too!
With the new consoles coming out this year, maybe HW tessellation gets another chance ;)

How to structure Point Light Sources?

I am using Java to write a very primitive 3D graphics engine based on The Black Art of 3D Game Programming from 1995. I have gotten to the point where I can draw single-color polygons to the screen and move the camera around the "scene". I even have a Z buffer that handles translucent objects properly by sorting those pixels by Z, as long as I don't show too many translucent pixels at once. I am at the point where I want to add lighting. I want to keep it simple: ambient light seems simple enough, and directional light should be fairly simple too. But I really want point lighting, with the ability to move the light source around and cast very primitive shadows (mostly I don't want light shining through walls).
My problem is that I don't know the best way to approach this. I imagine a point light source casting rays at regular angles, and if these rays intersect a polygon it will light that polygon and stop moving forward. However, when I think about a scene with multiple light sources and multiple polygons with all those rays, I imagine it will get very slow. I also don't know how to handle the case where a polygon is far enough away from a light source that it falls in between two rays. I would give each light source a maximum distance, and if I gave it enough rays, then there should be no point within that distance where two adjacent rays are far enough apart to miss a polygon, but that only worsens my problem with the number of calculations to perform.
My question to you is: Is there some trick to point light sources to speed them up or just to organize it better? I'm afraid I'll just get a nightmare of nested for loops. I can't use openGL or Direct3D or any other cheats because I want to write my own.
If you want to see my results so far, here is a youtube video. I have already fixed the bad camera rotation. http://www.youtube.com/watch?v=_XYj113Le58&feature=plcp
Lighting for real-time 3D applications is (or rather, has in the past generally been) done with very simple approximations - see http://en.wikipedia.org/wiki/Shading. Shadows are expensive, and in rasterizing 3D engines have generally been accomplished via shadow maps and shadow volumes. Point lights make shadows even more expensive.
Dynamic real-time light sources have only recently become a common feature in games, simply because they place such a heavy burden on the rendering system, and those games leverage dedicated graphics cards. So I think you may struggle to get good performance out of your engine if you decide to include dynamic, shadow-casting point lights.
Today it is commonplace for lighting to be applied in two ways:
Traditionally this has been "forward rendering". In this method, for every vertex (if you are doing the lighting per vertex) or fragment (if you are doing it per-pixel) you would calculate the contribution of each light source.
More recently, "deferred" lighting has become popular, wherein the geometry and extra data like normals and colour info are all rendered to intermediate buffers, which are then used to calculate lighting contributions. This way, the lighting calculations are not dependent on the geometry count. It does, however, have a lot of other overhead.
There are a lot of options. Implementing anything much more complex than some of the basic models that have been used by dedicated graphics cards over the past couple of years is going to be challenging, however!
My suggestion would be to start out with something simple - basic lighting without shadows. From there you can extend and optimize.
What are you doing the ray-triangle intersection test for? Are you trying to light only triangles which the light would reach? Ray-triangle intersections for every light with every poly are going to be very expensive, I think. For lighting without shadows, you would typically just iterate through every face (or, if you are doing it per vertex, through every vertex) and calculate and add the lighting contribution per light; you would do this just before you start rasterizing, as you have to pass through all polys in any case.
You can calculate the lighting using any illumination model, even something very simple like Lambertian reflectance, which shades the surface based on the dot product of the surface normal and the direction vector from the surface to the light. Make sure your vectors are in the same space! This is possibly why you are getting the strange results that you are. If your surface normal is in world space, be sure to calculate the world-space light vector. There are advantages to calculating lighting in certain spaces; you can have a look at that later on, but for now I suggest you just get the basics up and running. Also have a look at Blinn-Phong, the shading model graphics cards used for many years.
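To make that concrete, here is a minimal sketch of the per-light Lambertian loop (written in Python for brevity even though your engine is Java; the math translates directly, and every name here is made up):

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v)) or 1.0
    return tuple(c / length for c in v)

def shade_face(face_center, face_normal, face_color, ambient, point_lights):
    """Flat-shade one face: ambient term plus a Lambertian contribution
    from every point light, with simple inverse-square falloff.
    Lights are (position, color, intensity) tuples; colors are 0..1 RGB."""
    n = normalize(face_normal)
    r, g, b = (ambient * c for c in face_color)
    for light_pos, light_color, intensity in point_lights:
        to_light = tuple(lp - fc for lp, fc in zip(light_pos, face_center))
        dist2 = sum(c * c for c in to_light) or 1.0
        lambert = max(0.0, sum(a * d for a, d in zip(n, normalize(to_light))))
        atten = intensity / dist2
        r += face_color[0] * light_color[0] * lambert * atten
        g += face_color[1] * light_color[1] * lambert * atten
        b += face_color[2] * light_color[2] * lambert * atten
    return (min(r, 1.0), min(g, 1.0), min(b, 1.0))

# Usage: one white light directly above an upward-facing reddish face.
print(shade_face((0, 0, 0), (0, 1, 0), (0.8, 0.2, 0.2), 0.1,
                 [((0, 2, 0), (1, 1, 1), 4.0)]))
```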
For lighting with shadows - look into the links I posted. They were developed because realistic lighting is so expensive to calculate.
By the way, LaMothe had a follow-up book called Tricks of the 3D Game Programming Gurus: Advanced 3D Graphics and Rasterization.
This takes you through every step of programming a 3d engine. I am not sure what the black art book covers.

How to construct ground surface of infinite size in a 3D CAD application?

I am trying to create an application similar in UI to Sketchup. As a first step, I need to display the ground surface stretching out in all directions. What is the best way to do this?
Options:
Create a sufficiently large regular polygon stretching out in all directions from the origin. Here there is a possibility of the user hitting the edges and falling off the surface of the earth.
Model the surface of the earth as a sphere/spheroid. Here my vertex coordinates would be very large values prone to round-off errors. (The radius of the earth is 6,371,000,000 millimeters.)
Same as 1 but dynamically extend the ends of the earth as the user gets close to them.
What is the usual practice?
I guess you would do neither of these, but instead use a virtual ground.
So you just find out what portion of the ground is visible in the viewport and then create a plane large enough to fill that, with some reasonable maximum that simulates the end of the line of sight, a.k.a. the horizon as we know it.
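A minimal Python sketch of that virtual-ground idea (names and constants are made up): keep one ground quad centred under the camera, sized to reach well past the far plane, and rebuild it whenever the camera moves.

```python
def ground_quad(camera_pos, far_plane=10_000.0, margin=2.0):
    """Return the four corners (x, y, z) of a ground quad at y=0, centred
    under the camera and large enough to cover everything the camera can
    see out to the far plane."""
    cx, _, cz = camera_pos
    half = far_plane * margin        # generous: always extends past the far plane
    return [(cx - half, 0.0, cz - half),
            (cx + half, 0.0, cz - half),
            (cx + half, 0.0, cz + half),
            (cx - half, 0.0, cz + half)]

# Usage: recompute each frame, or whenever the camera has moved far enough.
print(ground_quad((123.0, 50.0, -456.0)))
```

Because the quad follows the camera, the user can never reach an edge (the problem with option 1), and the vertex coordinates stay close to the camera, avoiding the precision issues of option 2.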
