Raytracing the 'sunshape' - graphics

This is based on the question I asked here, but I think I might have asked the question in the wrong way. This is my problem:
I am writing a scientific ray tracer, i.e. not for graphics, although the concepts are identical.
I am firing rays from a horizontal plane toward a parabolic dish with a focus distance of 100m (and perfect specular reflection). I have a Target at the focal point of the dish. The rays are not fired perpendicularly from the plane but are perturbed by a certain angle to emulate the fact that the sun is not a point source but a disc in the sky.
However, the flux coming from the sun is not radially constant across the sun disc. It's hotter in the middle than at the edges. If you have ever looked at the sun on a hazy day you'll see a ring around the sun.
Because of the parabolic dish, the reflected image on the Target should be the image of the sun, i.e. it should be brighter (hotter, more flux) in the middle than at the edges. This is given by a graph of intensity vs. radial distance from the centre.
There are two ways I can simulate this.
Firstly: Uniform Sampling: Each ray is shot out from the plane with an equal (uniform) probability of taking an angle between zero and the size of the sun disk. I then scale the flux carried by the ray according to the corresponding flux value at that angle.
Secondly: Arbitrary Sampling: Each ray is shot out from the plane according to the distribution of intensity vs. radial distance. Therefore there will be fewer rays toward the outer edges than in the centre. This seems far more efficient to me, but I cannot get it to work. Any suggestions?
This is what I have done:
Uniformly
phi = 2*pi*X_1
alpha = arccos (1-(1-cos(theta))*X_2)
x = sin(alpha)*cos(phi)
y = sin(alpha)*sin(phi)
z = -cos(alpha)
Where X_1 and X_2 are uniform random numbers in [0, 1] and theta is the subtended angle of the solar disk.
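For reference, a minimal Python sketch of this uniform sampling (the function name and the NumPy usage are mine, not from the original code):

    import numpy as np

    # Uniform sampling over the solid angle of the sun cone, as in the
    # formulas above; theta is the angle used in those formulas (radians).
    def sample_sun_direction(theta, rng):
        x1, x2 = rng.random(2)
        phi = 2.0 * np.pi * x1
        alpha = np.arccos(1.0 - (1.0 - np.cos(theta)) * x2)
        return np.array([np.sin(alpha) * np.cos(phi),
                         np.sin(alpha) * np.sin(phi),
                         -np.cos(alpha)])

    # e.g. rng = np.random.default_rng(0); d = sample_sun_direction(4.65e-3, rng)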
Arbitrary Sampling
alpha = arccos (1-(1-cos(theta)) B1)
Where B_1 is a random number generated from an arbitrary distribution using the algorithm on pg 27 here.
I am desperate to sort this out.

Your function drops to zero, and since the sun is not a smooth-surfaced object, that is probably wrong. Chances are that photons are emitted from all parts of the sun in all directions.
But: what is your actual QUESTION?

You are looking for Monte-Carlo integration.
The key idea is: although you will sample fewer rays toward the outer part of the disc, you will weight those rays more, and they will contribute to the sum with a higher importance.
While with uniform sampling you just sum your intensity values, with non-uniform sampling you divide each intensity by the value of the probability density of the rays that are shot (e.g., for a uniform distribution this value is a constant and doesn't change anything).
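A minimal 1-D illustration of that weighting (the intensity profile and the sampling pdf below are invented for the example, not the real sunshape):

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical radial intensity profile, brighter in the centre (r in [0, 1]).
    def I(r):
        return 1.0 - 0.6 * r**2

    N = 200_000

    # Uniform sampling: pdf p(r) = 1, so just average the intensity values.
    r_uni = rng.random(N)
    est_uniform = np.mean(I(r_uni))

    # Non-uniform sampling: draw r from a pdf that favours the bright centre,
    # e.g. p(r) = (4 - 2r)/3 (inverse CDF: r = 2 - sqrt(4 - 3u)), and divide
    # each intensity by p(r) so the estimator stays unbiased.
    u = rng.random(N)
    r_imp = 2.0 - np.sqrt(4.0 - 3.0 * u)
    p = (4.0 - 2.0 * r_imp) / 3.0
    est_importance = np.mean(I(r_imp) / p)

    print(est_uniform, est_importance)   # both approximate the same integral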

Related

Raytracing and Computer Graphics. Color perception functions

Summary
This is a question about how to map light intensity values, as calculated in a raytracing model, to color values perceived by humans. I have built a ray tracing model and found that including the inverse square law in the calculation of light intensities produces graphical results which I believe are unintuitive. I think this is partly due to the limited range of brightness values available in 8-bit color images, but more likely that I should not be using a linear map between light intensity and pixel color.
Background
I developed a recent interest in creating computer graphics with raytracing techniques.
A basic raytracing model might work something like this:
Calculate ray vectors from the center of the camera (eye) in the direction of each screen pixel to be rendered
Perform vector collision tests with all objects in the world
If collision, make a record of the color of the object at the point where the collision occurs
Create a new vector from the collision point to the nearest light
Multiply the color of the light by the color of the object
This creates reasonable, but flat-looking, images, even when surface normals are included in the calculation.
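In code, that basic model might look roughly like the sketch below; the scene, camera and light helpers are hypothetical placeholders, not a real API.

    # Rough sketch of the steps listed above (placeholder helpers, not a real API).
    def render(scene, camera, width, height):
        image = [[(0, 0, 0)] * width for _ in range(height)]
        for y in range(height):
            for x in range(width):
                ray = camera.ray_through_pixel(x, y)        # ray per screen pixel
                hit = scene.intersect(ray)                  # collision test
                if hit is None:
                    continue
                obj_color = hit.object.color_at(hit.point)  # color at collision point
                light = scene.nearest_light(hit.point)      # nearest light
                image[y][x] = tuple(c * l for c, l in       # light color * object color
                                    zip(obj_color, light.color))
        return image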
Model Extensions
My interest was in trying to extend this model by including the distance into the light calculations.
If an object is lit by a light at distance d, then if the object is moved a distance 2d from the light source the illumination intensity is reduced by a factor of 4. This is the inverse square law.
It doesn't apply to all light models. (For example light arriving from an infinite distance has intensity independent of the position of an object.)
From playing around with my code I have found that this inverse square law doesn't produce the realistic lighting I was hoping for.
For example, I built some initial objects for a model of a room/scene, to test things out.
There are some objects at a distance of 3-5 from the camera.
There are walls which make a boundary for the room, and I have placed them at distances of order 10 to 100 from the camera.
There are some lights at a distance of order 10 from the camera.
What I have found is this
If the boundary of the room is more than distance 10 from the camera, the color values are very dim.
If the boundary of the room is a distance 100 from the camera it is completely invisible.
This doesn't match up with what I would expect intuitively. It makes sense mathematically, as I am using a linear function to translate between color intensity and RGB pixel values.
Discussion
Moving an object from a distance 10 to a distance 100 reduces the color intensity by a factor of (100/10)^2 = 100. Since pixel RGB colors are in the range of 0 - 255, clearly a factor of 100 is significant and would explain why an object at distance 10 moved to distance 100 becomes completely invisible.
However, I suspect that the human perception of color is non-linear in some way, and I assume this is a problem which has already been solved in computer graphics. (Otherwise raytracing engines wouldn't work.)
My guess would be there is some kind of color perception function which describes how absolute light intensities should be mapped to human perception of light intensity / color.
Does anyone know anything about this problem or can point me in the right direction?
If an object is lit by a light at distance d, then if the object is moved a distance 2d from the light source the illumination intensity is reduced by a factor of 4. This is the inverse square law.
The physical quantity you're describing here is not intensity, but irradiance (radiant flux per unit area). For a discussion of radiometric concepts in the context of ray tracing, see Chapter 5.4 of Physically Based Rendering.
If the boundary of the room is more than distance 10 from the camera, the color values are very dim.
If the boundary of the room is a distance 100 from the camera it is completely invisible.
The inverse square law can be a useful first approximation for point lights in a ray tracer (before more accurate lighting models are implemented). The key point is that the law - radiant flux falling off by the square of the distance - applies only to the light from the point source to a surface, not to the light that's then reflected from the surface to the camera.
In other words, moving the camera back from the scene shouldn't reduce the brightness of the objects in the rendered image; it should only reduce their size.
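A small sketch of that distinction, with hypothetical helper names: the 1/d^2 factor uses the light-to-surface distance only, and nothing involving the camera distance appears.

    import math

    def shade(hit_point, obj_color, light):
        d = math.dist(hit_point, light.position)    # light -> surface distance
        attenuation = 1.0 / (d * d)                 # inverse square law
        # the surface -> camera distance does not attenuate the result
        return tuple(c * lc * attenuation
                     for c, lc in zip(obj_color, light.color))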

Fitting a transition + circle + transition curve to a set of measured points

I am dealing with a reverse-engineering problem regarding road geometry and estimation of design conditions.
Suppose you have a set of points obtained from the measurement of positions of a road. This road has straight sections as well as curve sections. Straight sections are, of course, represented by lines, and curves are represented by circles of unknown center and radius. There are, as well, transition sections, which may be clothoids / Euler spirals or any other usual track transition curve. A representation of the track may look like this:
We know in advance that the road / track was designed taking this transition + circle + transition principle into account for every curve, yet we only have the measurement points, and the goal is to find the parameters describing every curve on the track, this is, the transition parameters as well as the circle's center and radius.
I have written some code using a nonlinear optimization algorithm, where a user can select start and end points and fit a circle to the arc section between them, as shown in the next figure:
However, I can't find a suitable way to take the transition into account. After giving it some thought I came to think that this is because, given a set of discrete points (with their measurement error) representing a full curve, it is not entirely clear where the curve "begins" and where it "ends" and, moreover, it is even less clear where the entry transition, the proper circle and the exit transition "begin" and "end".
Is there any work on this subject which I may have missed? Is there a proper way to fit the whole transition + circle + transition structure to the set of points?
As far as I know, there's no method to fit a sequence clothoid1-circle-clothoid2 into a given set of points.
The basic facts are that two points define a straight line, and three points define a unique circle.
The clothoid is far more complex, because you need: the parameter A, the final radius Rf, an initial point (px, py), the radius Ri at that point, and the tangent T (angle with the X-axis) at that point.
These are five pieces of data you may use to find the solution.
Because clothoid coordinates are calculated from series expansions of the Fresnel integrals (see https://math.stackexchange.com/a/3359006/688039 for a short explanation), followed by a translation and rotation, there is no easy way to fit this spiral to a given set of points.
When I've had to deal with your issue, what I've done is:
Calculate the radius for triplets of consecutive points: p1p2p3, p2p3p4, p3p4p5, etc. (see the sketch after this list).
Observe the sequence of radii. Similar values mean a circle, increasing/decreasing values mean a clothoid, and very large values mean a straight line.
For each basic element (line, circle) find the most probable characteristics (angles, vertices, radius) by hand or by some regression method. Often common sense works best.
For a spiral you may start with approximate values taken from the adjacent elements. These values may very well be the initial angle and point, and the initial and final radii. Then you need to iterate, playing with the Fresnel integrals and the change of coordinates, until you find a "good" parameter A. Then repeat with small variations in the other values, those you took from the adjacent elements.
Make whatever adjustments seem sensible. For example, many values (A, radius) tend to be integers, without decimals, simply because that was easier for the designer to type.
If you can make a small applet to do these steps, that's enough. Using typical road-design software helps, but doesn't spare you the iteration process.
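A small sketch of the triplet-radius step mentioned above (plain Python, points as (x, y) tuples; the function names are my own):

    import math

    # Radius of the circle through three points; large radii suggest a straight,
    # roughly constant radii a circular arc, steadily changing radii a clothoid.
    def circumradius(p1, p2, p3):
        a = math.dist(p2, p3)
        b = math.dist(p1, p3)
        c = math.dist(p1, p2)
        cross = (p2[0] - p1[0]) * (p3[1] - p1[1]) - (p2[1] - p1[1]) * (p3[0] - p1[0])
        if abs(cross) < 1e-12:
            return math.inf                      # collinear: straight section
        return (a * b * c) / (2.0 * abs(cross))  # R = abc / (4 * triangle area)

    def radius_sequence(points):
        return [circumradius(points[i], points[i + 1], points[i + 2])
                for i in range(len(points) - 2)]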
If the points are dense compared to the effective radii of curvature, estimate the local curvature by least-squares fitting of a circle to a small number of points, taking into account that the curvature is zero most of the time.
You will obtain a plot with constant values and ramps that connect them. You can use an estimate of the slope at the inflection points to figure out the transition points.

Geometrical Modeling of a Rope

Assume a rope of a given length and a given stiffness (that means a minimum bending radius). Both ends are fixed at given points in given directions (angles) on a plane, e.g. with clamps. The rope is loose and lies in one or more loops. It has to lie flat on the plane; no three-dimensional loops are allowed. It can lie in many different configurations depending on how loose the rope is, see image (sorry for my poor drawing).
I'm interested in the area of the plane that can be occupied by the rope (red on the image).
How can I model that in order to calculate that area?
The constraints are:
Length of the rope
minimum bending radius
Coordinates and angles of both ends of the rope
the rope has to lie flat on the plane (no 3D loops, just 2D)
Hint:
My intuition tells me that the two extreme configurations will be such that the minimum bending radius is used at both endpoints over a certain length, with a circular arc of larger radius in between, i.e. three arcs with G1 continuity (a G1 discontinuity would be like a zero radius).
You can construct them by drawing two circles of the minimum radius, tangent to the directions at the endpoints. The third circle will then be tangent to these two, with a radius such that the sum of the arc lengths equals the rope length. The contact points will be symmetrical with respect to the perpendicular bisector of the two endpoints, so you can express the unknown radius as a function of a single angle and solve for the known total length.

Uniform spatial bins on surface of a sphere

Is there a spatial lookup grid or binning system that works on the surface of a (3D) sphere? I have the requirements that
The bins must be uniform (so you can look up in constant time whether there exists a point within distance r of any spot on the sphere, for a fixed r).†
The number of bins must be at most linear with the surface area of the sphere. (Alternatively, increasing the surface resolution of the grid shouldn’t make it grow faster than the area it maps.)
I’ve already considered
Spherical coordinates: not good because the cells created are extremely nonuniform making it useless for proximity testing.
Cube meshes: Less distortion than spherical coordinates, but still very difficult to determine which cells to search for a given query.
3D voxel binning: Wastes the entire interior volume of the sphere with empty bins that will never be used (as well as the empty bins at the 6 corners of the bounding cube). Space requirements grow with O(n sqrt(n)) with increasing sphere surface area.
kd-Trees: perform poorly in 3D and are technically logarithmic complexity, not constant per query.
My best idea for a solution involves using the 3D voxel binning method, but somehow excluding the voxels that the sphere will never intersect. However I have no idea how to determine which voxels to exclude, nor how to calculate an index into such a structure given a query location on the sphere.
† For what it’s worth the points have a minimum spacing so a good grid really would guarantee constant lookup.
My suggestion would be a variant of spherical coordinates, such that the polar angle φ is not divided uniformly; instead its cosine (equivalently, the sine of the latitude) is divided uniformly. This way the element of area sinφ dφ dΘ = -d(cosφ) dΘ is kept constant, leading to tiles of the same area (though variable aspect ratio).
At the poles, merge all tiles into a single disk-like polygon.
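A minimal sketch of that indexing (the band/column counts and the function name are my own choices for illustration):

    import math

    # Map a unit vector on the sphere to an equal-area (band, column) bin:
    # z = cos(polar angle) = sin(latitude) is divided uniformly, so every
    # latitude band covers the same area.
    def bin_index(x, y, z, n_bands, n_cols):
        band = min(int((z + 1.0) * 0.5 * n_bands), n_bands - 1)
        lon = math.atan2(y, x)                                       # in (-pi, pi]
        col = min(int((lon + math.pi) / (2.0 * math.pi) * n_cols), n_cols - 1)
        return band, col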
Another possibility is to project a regular icosahedron onto the sphere and triangulate the spherical triangles so obtained. This takes a little spherical trigonometry.
I had a similar problem and used "sparse" 3D voxel binning. Basically, my spatial index is a hash map from (x, y, z) coordinates to bins.
Because I also had a minimum distance constraint on my points, I chose the bin size such that a bin can contain at most one point. This is accomplished if the edge of the (cubic) bins is at most d / sqrt(3), where d is the minimum separation of two points on the sphere. The advantage is that you can represent a full bin as a single point, and an empty bin can just be absent from the hash map.
My only query was for points within a radius d (the same d), which then requires scanning the surrounding 125 bins (a 5×5×5 cube). You could technically leave off the 8 corners to get this down to 117, but I didn't bother.
An alternative for the bin size is to optimize it for queries rather than storage size and simplicity, and choose it such that you always have to scan at most 27 bins (a 3×3×3 cube). That would require a bin edge length of d. I think (but haven't thought hard about it) that a bin could contain up to 4 points in that case. You could represent these with a fixed-size array to save one pointer indirection.
In either case, the memory usage of your spatial index will be O(n) for n points, so it doesn't get any better than that.
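A minimal sketch of that sparse index (class and method names are mine, not from a library), assuming points with minimum separation d:

    import math

    class SphereGrid:
        """Sparse voxel hash; with cell edge d / sqrt(3) each cell holds at most one point."""
        def __init__(self, d):
            self.d = d
            self.cell = d / math.sqrt(3.0)
            self.bins = {}                        # (i, j, k) -> point

        def _key(self, p):
            return tuple(int(math.floor(c / self.cell)) for c in p)

        def insert(self, p):
            self.bins[self._key(p)] = p

        def neighbours_within_d(self, p):
            i0, j0, k0 = self._key(p)
            found = []
            for di in range(-2, 3):               # scan the surrounding 5x5x5 block
                for dj in range(-2, 3):
                    for dk in range(-2, 3):
                        q = self.bins.get((i0 + di, j0 + dj, k0 + dk))
                        if q is not None and q != p and math.dist(p, q) <= self.d:
                            found.append(q)
            return found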

Find the closest point on a curve to a given point

The curve is in fact the trajectory of a bus, the curve is represented by many (up to a few thousand) discrete points on the curve (the points were recorded by a GPS device installed on the bus).
Given a point P, I need to find the closest point on the curve to P. The point P is usually no more than 30 m away from the trajectory of the bus. Note that the closest point isn't necessarily a point recorded by the GPS device; it could be a point somewhere between two recorded points.
First I need an algorithm to recover the trajectory from those recorded points. It would be great if the interpolated curve could show sharp turns made by the bus. Which curve is best for such a task? Is a Bezier curve good enough? Finally I have to calculate the closest point on the curve, and of course that algorithm depends entirely on the kind of curve chosen.
I'm doing some research, and don't have much knowledge in curve interpolation, so any suggestions are welcome.
For computing the trajectory from recorded points, I recommend using the centripetal or chord-length Catmull-Rom splines. See link for more details. Catmull-Rom splines are in fact special cubic Hermite curves, which can be easily converted into cubic Bezier curves. Please note that the result from a Catmull-Rom spline is only a G1 curve in general. If you want the trajectory to have higher continuity (such as C2), you can go with natural cubic splines or general B-spline interpolation. Whatever approach you take, it is advised to keep the spline's degree no higher than 5. Degree 3 is a popular choice.
Once you have the mathematical representation for the trajectory, you can compute the minimum distance between a given point P and the trajectory. In general, the squared distance between point P and a curve C(t) is represented as D(t) = |P-C(t)|^2. The minimum of D will happen at where its first derivative is zero, which means we have to find the root for the following equation:
dD/dt = -2*(P-C(t)).C'(t) = 0
When C(t) is of degree 3, dD/dt will be of degree 5. This is the reason why it is recommended to use a low degree curve earlier.
There is a lot of literature and online material on how to find the roots of a polynomial (of any degree) efficiently and robustly. Here is another SO post that might be useful.
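As a purely numerical alternative to exact degree-5 root finding, here is a sketch for a single cubic segment written in power basis, C(t) = a + b*t + c*t^2 + d*t^3, with t in [0, 1]: sample coarsely to bracket the minimum, then refine with a few Newton steps on dD/dt (the coefficient layout and function name are my own):

    import numpy as np

    def closest_point_on_cubic(P, a, b, c, d, n_samples=64, n_newton=5):
        P, a, b, c, d = (np.asarray(v, dtype=float) for v in (P, a, b, c, d))
        C  = lambda t: a + b*t + c*t**2 + d*t**3
        C1 = lambda t: b + 2*c*t + 3*d*t**2          # C'(t)
        C2 = lambda t: 2*c + 6*d*t                   # C''(t)

        # coarse bracketing of the minimum of D(t) = |P - C(t)|^2 on [0, 1]
        ts = np.linspace(0.0, 1.0, n_samples)
        t = ts[int(np.argmin([np.dot(P - C(s), P - C(s)) for s in ts]))]

        # Newton iterations on g(t) = (P - C(t)) . C'(t) = 0
        for _ in range(n_newton):
            g  = np.dot(P - C(t), C1(t))
            dg = -np.dot(C1(t), C1(t)) + np.dot(P - C(t), C2(t))
            if abs(dg) < 1e-12:
                break
            t = min(1.0, max(0.0, t - g / dg))
        return t, C(t)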
