Ball to Ball Collision - Detection and Handling

With the help of the Stack Overflow community I've written a pretty basic, but fun, physics simulator.
You click and drag the mouse to launch a ball. It will bounce around and eventually stop on the "floor".
The next big feature I want to add is ball-to-ball collision. Each ball's movement is broken up into an x and a y speed vector. I have gravity (a small reduction of the y vector each step) and friction (a small reduction of both vectors on each collision with a wall). The balls honestly move around in a surprisingly realistic way.
I guess my question has two parts:
What is the best method to detect ball to ball collision?
Do I just have an O(n^2) loop that iterates over each ball and checks every other ball to see if its radius overlaps?
What equations do I use to handle the ball-to-ball collisions? Physics 101.
How does a collision affect the two balls' x/y speed vectors? What is the resulting direction the two balls head off in? How do I apply this to each ball?
Handling the collision detection against the "walls" and the resulting vector changes was easy, but I see more complications with ball-ball collisions. With walls I simply had to take the negative of the appropriate x or y vector and off it would go in the correct direction. With balls I don't think it is that simple.
Some quick clarifications: for simplicity I'm OK with a perfectly elastic collision for now. Also, all my balls have the same mass right now, but I might change that in the future.
Edit: Resources I have found useful
2d Ball physics with vectors: 2-Dimensional Collisions Without Trigonometry.pdf
2d Ball collision detection example: Adding Collision Detection
Success!
I have the ball collision detection and response working great!
Relevant code:
Collision Detection:
for (int i = 0; i < ballCount; i++)
{
    for (int j = i + 1; j < ballCount; j++)
    {
        if (balls[i].colliding(balls[j]))
        {
            balls[i].resolveCollision(balls[j]);
        }
    }
}
This will check for collisions between every ball, but skip redundant checks (if you check whether ball 1 collides with ball 2, you don't need to check whether ball 2 collides with ball 1; it also skips checking a ball against itself).
Then, in my ball class I have my colliding() and resolveCollision() methods:
public boolean colliding(Ball ball)
{
    float xd = position.getX() - ball.position.getX();
    float yd = position.getY() - ball.position.getY();

    float sumRadius = getRadius() + ball.getRadius();
    float sqrRadius = sumRadius * sumRadius;

    float distSqr = (xd * xd) + (yd * yd);

    if (distSqr <= sqrRadius)
    {
        return true;
    }

    return false;
}
public void resolveCollision(Ball ball)
{
    // get the mtd
    Vector2d delta = (position.subtract(ball.position));
    float d = delta.getLength();
    // minimum translation distance to push balls apart after intersecting
    Vector2d mtd = delta.multiply(((getRadius() + ball.getRadius()) - d) / d);

    // resolve intersection --
    // inverse mass quantities
    float im1 = 1 / getMass();
    float im2 = 1 / ball.getMass();

    // push-pull them apart based off their mass
    position = position.add(mtd.multiply(im1 / (im1 + im2)));
    ball.position = ball.position.subtract(mtd.multiply(im2 / (im1 + im2)));

    // impact speed
    Vector2d v = (this.velocity.subtract(ball.velocity));
    float vn = v.dot(mtd.normalize());

    // sphere intersecting but moving away from each other already
    if (vn > 0.0f) return;

    // collision impulse
    float i = (-(1.0f + Constants.restitution) * vn) / (im1 + im2);
    Vector2d impulse = mtd.normalize().multiply(i);

    // change in momentum
    this.velocity = this.velocity.add(impulse.multiply(im1));
    ball.velocity = ball.velocity.subtract(impulse.multiply(im2));
}
Source Code: Complete source for ball to ball collider.
If anyone has some suggestions for how to improve this basic physics simulator let me know! One thing I have yet to add is angular momentum so the balls will roll more realistically. Any other suggestions? Leave a comment!

To detect whether two balls collide, just check whether the distance between their centers is less than two times the radius. To do a perfectly elastic collision between the balls, you only need to worry about the component of the velocity that is in the direction of the collision. The other component (tangent to the collision) will stay the same for both balls. You can get the collision components by creating a unit vector pointing in the direction from one ball to the other, then taking the dot product with the velocity vectors of the balls. You can then plug these components into a 1D perfectly elastic collision equation.
Wikipedia has a pretty good summary of the whole process. For balls of any mass, the new velocities along the collision direction can be calculated with the 1D elastic collision equations (where v1 and v2 are the velocities after the collision, and u1, u2 are from before):
v1 = (u1 * (m1 - m2) + 2 * m2 * u2) / (m1 + m2)
v2 = (u2 * (m2 - m1) + 2 * m1 * u1) / (m1 + m2)
If the balls have the same mass then the velocities are simply swapped. Here's some code I wrote which does something similar:
void Simulation::collide(Storage::Iterator a, Storage::Iterator b)
{
    // Check whether there actually was a collision
    if (a == b)
        return;

    Vector collision = a.position() - b.position();
    double distance = collision.length();
    if (distance == 0.0) {    // hack to avoid div by zero
        collision = Vector(1.0, 0.0);
        distance = 1.0;
    }
    if (distance > 1.0)       // 1.0 is the collision distance (sum of radii) in this simulation
        return;

    // Get the components of the velocity vectors which are parallel to the collision.
    // The perpendicular component remains the same for both fish
    collision = collision / distance;
    double aci = a.velocity().dot(collision);
    double bci = b.velocity().dot(collision);

    // Solve for the new velocities using the 1-dimensional elastic collision equations.
    // Turns out it's really simple when the masses are the same.
    double acf = bci;
    double bcf = aci;

    // Replace the collision velocity components with the new ones
    a.velocity() += (acf - aci) * collision;
    b.velocity() += (bcf - bci) * collision;
}
As for efficiency, Ryan Fox is right: you should consider dividing the region into sections, then doing collision detection within each section. Keep in mind that balls can collide with other balls on the boundaries of a section, so this may make your code much more complicated. Efficiency probably won't matter until you have several hundred balls though. For bonus points, you can run each section on a different core, or split up the processing of collisions within each section.

Well, years ago I wrote a program like the one you present here.
There is one hidden problem (or many, depending on your point of view):
If the speed of a ball is too high, you can miss the collision entirely.
Also, in almost 100% of cases your new speeds will be wrong. Well, not the speeds so much as the positions: you have to calculate the new speeds precisely at the correct place. Otherwise you just shift the balls by some small "error" amount carried over from the previous discrete step.
The solution is obvious: you have to split the timestep so that first you shift the balls to the exact point of contact, then resolve the collision, then shift them for the rest of the time step.
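A minimal sketch of that sub-stepping, assuming the Ball accessors used in the question (position.getX()/getY(), getRadius()) plus matching getX()/getY() accessors on the velocity vector: solve for the exact time within the step at which the two circles first touch, advance to that time, resolve the collision, then advance for the remainder of the step.

// Returns the time t in [0, dt] at which the two balls first touch,
// or -1 if they don't touch during this step.
static float timeOfImpact(Ball a, Ball b, float dt) {
    float dx  = b.position.getX() - a.position.getX();    // relative position
    float dy  = b.position.getY() - a.position.getY();
    float dvx = b.velocity.getX() - a.velocity.getX();     // relative velocity
    float dvy = b.velocity.getY() - a.velocity.getY();
    float r = a.getRadius() + b.getRadius();
    // |d + t*dv|^2 = r^2  =>  (dv.dv) t^2 + 2 (d.dv) t + (d.d - r^2) = 0
    float qa = dvx * dvx + dvy * dvy;
    float qb = 2 * (dx * dvx + dy * dvy);
    float qc = dx * dx + dy * dy - r * r;
    if (qa == 0) return -1;                                // no relative motion
    float disc = qb * qb - 4 * qa * qc;
    if (disc < 0) return -1;                               // the balls never get within r of each other
    float t = (-qb - (float) Math.sqrt(disc)) / (2 * qa);  // earliest root
    return (t >= 0 && t <= dt) ? t : -1;
}

For a pair that returns t >= 0, the step becomes: advance both balls by t, resolve the collision, then advance them for the remaining dt - t.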

You should use space partitioning to solve this problem.
Read up on Binary Space Partitioning and Quadtrees.
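For example, a bare-bones quadtree for the broad phase might look like the sketch below. This is not from the question's code; it assumes the Ball accessors position.getX()/getY() and getRadius() shown above, and needs java.util.ArrayList, Iterator and List. Insert every ball each frame, then query a ball's bounding box to get a short candidate list instead of testing all n balls.

class Quadtree {
    static final int MAX_OBJECTS = 8;
    static final int MAX_DEPTH = 6;

    private final float x, y, w, h;                        // this node's region
    private final int depth;
    private final List<Ball> objects = new ArrayList<>();  // balls stored at this node
    private Quadtree[] children;                           // null until the node splits

    Quadtree(float x, float y, float w, float h, int depth) {
        this.x = x; this.y = y; this.w = w; this.h = h; this.depth = depth;
    }

    void insert(Ball b) {
        if (children != null) {
            Quadtree child = childContaining(b);
            if (child != null) { child.insert(b); return; }
        }
        objects.add(b);
        if (children == null && objects.size() > MAX_OBJECTS && depth < MAX_DEPTH) {
            split();
            // Move every ball that fits entirely inside a child down into it.
            for (Iterator<Ball> it = objects.iterator(); it.hasNext(); ) {
                Ball o = it.next();
                Quadtree child = childContaining(o);
                if (child != null) { child.insert(o); it.remove(); }
            }
        }
    }

    // Collect every ball stored in a node that overlaps the query rectangle.
    void query(float qx, float qy, float qw, float qh, List<Ball> out) {
        if (qx > x + w || qx + qw < x || qy > y + h || qy + qh < y) return;
        out.addAll(objects);
        if (children != null) {
            for (Quadtree c : children) c.query(qx, qy, qw, qh, out);
        }
    }

    private void split() {
        float hw = w / 2, hh = h / 2;
        children = new Quadtree[] {
            new Quadtree(x,      y,      hw, hh, depth + 1),
            new Quadtree(x + hw, y,      hw, hh, depth + 1),
            new Quadtree(x,      y + hh, hw, hh, depth + 1),
            new Quadtree(x + hw, y + hh, hw, hh, depth + 1)
        };
    }

    // Returns the child whose region fully contains the ball, or null.
    private Quadtree childContaining(Ball b) {
        float bx = b.position.getX(), by = b.position.getY(), r = b.getRadius();
        for (Quadtree c : children) {
            if (bx - r >= c.x && bx + r <= c.x + c.w && by - r >= c.y && by + r <= c.y + c.h) {
                return c;
            }
        }
        return null;
    }
}

Each frame: rebuild the tree, insert all balls, then for each ball query the rectangle (x - r, y - r, 2r, 2r) and run the precise circle test only on the returned candidates.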

As a clarification to the suggestion by Ryan Fox to split the screen into regions, and only checking for collisions within regions...
e.g. split the play area up into a grid of squares (which we will arbitrarily say are 1 unit long per side), and check for collisions within each grid square.
That's absolutely the correct solution. The only problem with it (as another poster pointed out) is that collisions across boundaries are a problem.
The solution to this is to overlay a second grid at a 0.5 unit vertical and horizontal offset to the first one.
Then, any collisions that would be across boundaries in the first grid (and hence not detected) will be within grid squares in the second grid. As long as you keep track of the collisions you've already handled (as there is likely to be some overlap) you don't have to worry about handling edge cases. All collisions will be within a grid square on one of the grids.

A good way of reducing the number of collision checks is to split the screen into different sections. You then only compare each ball to the balls in the same section.
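A sketch of that using a uniform grid, reusing the Ball methods from the question (colliding(), resolveCollision(), position.getX()/getY()); cellSize should be at least one ball diameter so that checking the neighbouring cells is enough. Needs java.util.ArrayList, HashMap, List and Map.

static long cellKey(int gx, int gy) {
    return (((long) gx) << 32) | (gy & 0xffffffffL);
}

static void resolveWithGrid(Ball[] balls, int ballCount, float cellSize) {
    // Bucket each ball index by the grid cell containing its centre.
    Map<Long, List<Integer>> grid = new HashMap<>();
    for (int i = 0; i < ballCount; i++) {
        int gx = (int) Math.floor(balls[i].position.getX() / cellSize);
        int gy = (int) Math.floor(balls[i].position.getY() / cellSize);
        grid.computeIfAbsent(cellKey(gx, gy), k -> new ArrayList<>()).add(i);
    }
    // Test a ball only against balls in its own cell and the eight neighbouring cells;
    // checking the neighbours also catches pairs that straddle a cell boundary.
    for (int i = 0; i < ballCount; i++) {
        int gx = (int) Math.floor(balls[i].position.getX() / cellSize);
        int gy = (int) Math.floor(balls[i].position.getY() / cellSize);
        for (int nx = gx - 1; nx <= gx + 1; nx++) {
            for (int ny = gy - 1; ny <= gy + 1; ny++) {
                List<Integer> cell = grid.get(cellKey(nx, ny));
                if (cell == null) continue;
                for (int j : cell) {
                    if (j > i && balls[i].colliding(balls[j])) {  // j > i skips duplicate pairs
                        balls[i].resolveCollision(balls[j]);
                    }
                }
            }
        }
    }
}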

One thing I see here to optimize.
While I do agree that the balls hit when the distance is the sum of their radii, one should never actually calculate this distance! Rather, calculate its square and work with it that way. There's no reason for that expensive square root operation.
Also, once you have found a collision you have to continue to evaluate collisions until no more remain. The problem is that the first one might cause others that have to be resolved before you get an accurate picture. Consider what happens if the ball hits a ball at the edge? The second ball hits the edge and immediately rebounds into the first ball. If you bang into a pile of balls in the corner you could have quite a few collisions that have to be resolved before you can iterate the next cycle.
As for the O(n^2), all you can do is minimize the cost of rejecting ones that miss:
1) A ball that is not moving can't hit anything. If there are a reasonable number of balls lying around on the floor this could save a lot of tests. (Note that you must still check if something hit the stationary ball.)
2) Something that might be worth doing: divide the screen into a number of zones, but the lines should be fuzzy--balls at the edge of a zone are listed as being in all the relevant (could be 4) zones. I would use a 4x4 grid and store the zones as bits. If an AND of the zone masks of two balls returns zero, end of test (see the sketch after this list).
3) As I mentioned, don't do the square root.
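Here is a sketch of the fuzzy 4x4 zone test from point 2, with the grid stored as a 16-bit mask per ball. It uses the Ball accessors from the question; worldWidth and worldHeight are hypothetical names for the play-area size.

// Builds a 16-bit mask of the 4x4 zones a ball touches (fuzzy edges: a ball whose
// bounding box straddles a zone line gets the bit for every zone it overlaps).
static int zoneMask(Ball b, float worldWidth, float worldHeight) {
    float cellW = worldWidth / 4, cellH = worldHeight / 4;
    int minX = clamp((int) ((b.position.getX() - b.getRadius()) / cellW), 0, 3);
    int maxX = clamp((int) ((b.position.getX() + b.getRadius()) / cellW), 0, 3);
    int minY = clamp((int) ((b.position.getY() - b.getRadius()) / cellH), 0, 3);
    int maxY = clamp((int) ((b.position.getY() + b.getRadius()) / cellH), 0, 3);
    int mask = 0;
    for (int zx = minX; zx <= maxX; zx++)
        for (int zy = minY; zy <= maxY; zy++)
            mask |= 1 << (zy * 4 + zx);
    return mask;
}

static int clamp(int v, int lo, int hi) { return Math.max(lo, Math.min(hi, v)); }

// In the O(n^2) loop, reject a pair cheaply before the distance test:
// if ((mask[i] & mask[j]) == 0) continue;

Recompute the mask for each ball once per step; two balls can only collide if their masks share a bit.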

I found an excellent page with information on collision detection and response in 2D.
http://www.metanetsoftware.com/technique.html (web.archive.org)
They try to explain how it's done from an academic point of view. They start with the simple object-to-object collision detection, and move on to collision response and how to scale it up.
Edit: Updated link

You have two easy ways to do this. Jay has covered the accurate way of checking from the center of the ball.
The easier way is to use a rectangular bounding box: set the size of your box to be 80% of the size of the ball, and you'll simulate collision pretty well.
Add a method to your ball class:
public Rectangle getBoundingRect()
{
    int ballHeight = (int) (Ball.Height * 0.80f);
    int ballWidth = (int) (Ball.Width * 0.80f);
    int x = Ball.X - ballWidth / 2;
    int y = Ball.Y - ballHeight / 2;

    return new Rectangle(x, y, ballWidth, ballHeight); // Rectangle takes width before height
}
Then, in your loop:
// Checks every ball against every other ball.
// For best results, split it into quadrants like Ryan suggested.
// I didn't do that for simplicity here.
for (int i = 0; i < balls.count; i++)
{
    Rectangle r1 = balls[i].getBoundingRect();

    for (int k = 0; k < balls.count; k++)
    {
        if (balls[i] != balls[k])
        {
            Rectangle r2 = balls[k].getBoundingRect();
            if (r1.Intersects(r2))
            {
                // balls[i] collided with balls[k]
            }
        }
    }
}

I see it hinted here and there, but you could also do a faster calculation first, like, compare the bounding boxes for overlap, and THEN do a radius-based overlap if that first test passes.
The addition/subtraction math is much faster for a bounding box than the multiplications for the radius test, and most of the time, the bounding box test will dismiss the possibility of a collision. But if you then re-test with the radius check, you're getting the accurate results that you're seeking.
Yes, it's two tests, but it will be faster overall.
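For instance, the two-stage test could be folded into a variant of the colliding() method from the question; the box rejection needs only additions, absolute values and comparisons, and the exact squared-distance test only runs when the boxes overlap:

public boolean collidingFast(Ball ball)
{
    float sumRadius = getRadius() + ball.getRadius();
    float xd = position.getX() - ball.position.getX();
    float yd = position.getY() - ball.position.getY();

    // Cheap bounding-box rejection: most non-colliding pairs stop here.
    if (Math.abs(xd) > sumRadius || Math.abs(yd) > sumRadius) return false;

    // Accurate test: compare squared distance with squared radius sum (no sqrt needed).
    return (xd * xd) + (yd * yd) <= sumRadius * sumRadius;
}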

This KineticModel is an implementation of the cited approach in Java.

I implemented this code in JavaScript using the HTML Canvas element, and it produced wonderful simulations at 60 frames per second. I started the simulation off with a collection of a dozen balls at random positions and velocities. I found that at higher velocities, a glancing collision between a small ball and a much larger one caused the small ball to appear to STICK to the edge of the larger ball, and moved up to around 90 degrees around the larger ball before separating. (I wonder if anyone else observed this behavior.)
Some logging of the calculations showed that the Minimum Translation Distance in these cases was not large enough to prevent the same balls from colliding in the very next time step. I did some experimenting and found that I could solve this problem by scaling up the MTD based on the relative velocities:
dot_velocity = ball_1.velocity.dot(ball_2.velocity);
mtd_factor = 1. + 0.5 * Math.abs(dot_velocity * Math.sin(collision_angle));
mtd.multiplyScalar(mtd_factor);
I verified that before and after this fix, the total kinetic energy was conserved for every collision. The 0.5 value in mtd_factor was approximately the minimum value found to always cause the balls to separate after a collision.
Although this fix introduces a small amount of error in the exact physics of the system, the tradeoff is that now very fast balls can be simulated in a browser without decreasing the time step size.

Improving the circle-with-circle collision detection given within the question:
float dx = circle1.x - circle2.x,
      dy = circle1.y - circle2.y,
      r = circle1.r + circle2.r;
return (dx * dx + dy * dy <= r * r);
It avoids the unnecessary "if with two returns" and the use of more variables than necessary.

After some trial and error, I used this document's method for 2D collisions: https://www.vobarian.com/collisions/2dcollisions2.pdf
(which the OP linked to)
I applied this within a JavaScript program using p5.js, and it works perfectly. I had previously attempted to use trigonometric equations, and while they do work for specific collisions, I could not find one that worked for every collision no matter the angle at which it happened.
The method explained in this document uses no trigonometric functions whatsoever; it's just plain vector operations. I recommend it to anyone trying to implement ball-to-ball collision; trigonometric functions in my experience are hard to generalize. I asked a physicist at my university to show me how to do it and he told me not to bother with trigonometric functions and showed me a method that is analogous to the one linked in the document.
NB : My masses are all equal, but this can be generalised to different masses using the equations presented in the document.
Here's my code for calculating the resulting speed vectors after collision:
// You just need a ball object with a position and speed vector.
class TBall {
    constructor(x, y, vx, vy) {
        this.r = [x, y];
        this.v = [vx, vy];
    }
}

// Small vector helpers used below.
function vecNorm(v) { return Math.sqrt(v[0] * v[0] + v[1] * v[1]); }
function dotProd(a, b) { return a[0] * b[0] + a[1] * b[1]; }
function vecSum(a, b) { return [a[0] + b[0], a[1] + b[1]]; }

// Throw two balls into this function and it'll update their speed vectors
// if they collide; you need to call this in your main loop for every pair
// of balls.
function collision(ball1, ball2) {
    const n = [ball1.r[0] - ball2.r[0], ball1.r[1] - ball2.r[1]]; // normal direction
    const un = [n[0] / vecNorm(n), n[1] / vecNorm(n)];            // unit normal
    const ut = [-un[1], un[0]];                                   // unit tangent
    const v1n = dotProd(un, ball1.v);
    const v1t = dotProd(ut, ball1.v);
    const v2n = dotProd(un, ball2.v);
    const v2t = dotProd(ut, ball2.v);
    // Tangential components are unchanged; normal components are swapped (equal masses).
    const v1t_p = v1t, v2t_p = v2t;
    const v1n_p = v2n, v2n_p = v1n;
    const v1n_pvec = [v1n_p * un[0], v1n_p * un[1]];
    const v1t_pvec = [v1t_p * ut[0], v1t_p * ut[1]];
    const v2n_pvec = [v2n_p * un[0], v2n_p * un[1]];
    const v2t_pvec = [v2t_p * ut[0], v2t_p * ut[1]];
    ball1.v = vecSum(v1n_pvec, v1t_pvec);
    ball2.v = vecSum(v2n_pvec, v2t_pvec);
}

I would consider using a quadtree if you have a large number of balls. For deciding the direction of bounce, just use simple conservation of energy formulas based on the collision normal. Elasticity, weight, and velocity would make it a bit more realistic.

Here is a simple example that supports mass.
private void CollideBalls(Transform ball1, Transform ball2, ref Vector3 vel1, ref Vector3 vel2, float radius1, float radius2)
{
    var vec = ball1.position - ball2.position;
    float dis = vec.magnitude;
    if (dis < radius1 + radius2)
    {
        var n = vec.normalized;
        // ballMass1 and ballMass2 are fields of the containing class
        ReflectVelocity(ref vel1, ref vel2, ballMass1, ballMass2, n);

        // Separate the balls so they just touch at the contact point
        var c = Vector3.Lerp(ball1.position, ball2.position, radius1 / (radius1 + radius2));
        ball1.position = c + (n * radius1);
        ball2.position = c - (n * radius2);
    }
}

public static void ReflectVelocity(ref Vector3 vel1, ref Vector3 vel2, float mass1, float mass2, Vector3 intersectionNormal)
{
    float velImpact1 = Vector3.Dot(vel1, intersectionNormal);
    float velImpact2 = Vector3.Dot(vel2, intersectionNormal);
    float totalMass = mass1 + mass2;
    float massTransfer1 = mass1 / totalMass;
    float massTransfer2 = mass2 / totalMass;

    // As written this brings both balls to the common, momentum-conserving velocity along
    // the normal (zero restitution); doubling the transferred terms gives an elastic bounce.
    vel1 += ((velImpact2 * massTransfer2) - (velImpact1 * massTransfer2)) * intersectionNormal;
    vel2 += ((velImpact1 * massTransfer1) - (velImpact2 * massTransfer1)) * intersectionNormal;
}

Related

Splitting quadratic Bezier curve into several unequal parts

I am dealing with clipping of quadratic Beziér curves. Clipping is a standard graphics task. Typically, no matter what we display on a screen, we only want to render the part that fits into the screen bounds, as an optimization.
For straight lines, there is something called the Cohen-Sutherland algorithm, and a slightly extended version of this algorithm is the Sutherland-Hodgman algorithm, where the first solution deals with lines and the second with polygons.
Essentially, the algorithms split the computer screen into tic-tac-toe-like squares, where the central square is what fits on the screen, and we special-case each of left/right and above/below. Then, when one end of the line is off the screen and the other is not, we replace the x coordinate for this point with the screen's maximum x value, and calculate the y value for it. This becomes the new endpoint of the clipped line. Pretty simple, and it works well.
With Bezier curves, the same approach can be taken, only in addition to the ends, we need to consider control points. In the case of a quadratic curve, there is only one control point.
To clip the curve, we can do something very similar to Cohen-Sutherland. Only, depending on the situation, we might need to cut the original curve into up to five (5) pieces. Just like both ends of a straight line might be offscreen while the center is visible, the same situation needs to be handled with curves, yet here we only need to deal with the hull [height] of the curve causing a mid-section to be invisible. Therefore, we might end up with two new curves after the clipping.
Finding one of the coordinates for these curves is pretty easy. It is still the min/max coordinate for one of the axes, and the value of the other coordinate. There is prior art for this; for example, calculate x for y is a good starting point. We want to adapt the formula so vectors turn into separate x and y coordinates, but the rest is doable.
Next, however, we still have an unsolved problem: these one or two new curves are completely new quadratic curves, and each will therefore have a new control point.
There is a thread at split quadratic curve into two where the author seems to be doing kind of what I need, albeit in a slightly different way. There is an accepted answer, yet I could not get the results to work.
I want to end-up with a function like:
function clipSegment(sx, sy, cx, cy, ex, ey, bounds) {
    let curves: {sx, sy, cx, cy, ex, ey}[] = [];
    ...
    return curves;
}
It should take the coordinates and a bounds object that has both min and max for both the x and y coordinates. I think that the Cohen-Sutherland approach with squares and bit-codes should work here just as well. We get more cases for curves, but everything is doable. My problem is the new control point coordinates. For example, we could calculate t from one of the coordinates, doing something like:
function getQuadraticPoint(t, sx, sy, cp1x, cp1y, ex, ey) {
    const x = (1 - t) * (1 - t) * sx + 2 * (1 - t) * t * cp1x + t * t * ex;
    const y = (1 - t) * (1 - t) * sy + 2 * (1 - t) * t * cp1y + t * t * ey;
    return { x, y };
}
Once we have the new start and/or beginning, how do we get the new control points?
Some developers I found online, working on similar problems, recommended just working with t and changing its interval from [0, 1] to [0, t]. This however won't work easily with the Canvas 2D API. The 2D Path API needs the control point and the end point [after the pen moves to the beginning with moveTo].
I believe that the quadratic Beziér case should have a closed-form solution. Yet, I have not figured out what it is. Any ideas?
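For what it's worth, de Casteljau subdivision does give a closed form: splitting the quadratic (S, C, E) at parameter t, the left piece is (S, lerp(S, C, t), B(t)) and the right piece is (B(t), lerp(C, E, t), E), where B(t) is the point computed by getQuadraticPoint above. A minimal sketch (the function name and array layout are illustrative, not from the question's code):

// Splits the quadratic Bezier (sx,sy)-(cx,cy)-(ex,ey) at parameter t and returns
// the two halves, each as {sx, sy, cx, cy, ex, ey} (de Casteljau subdivision).
static double[][] splitQuadratic(double sx, double sy, double cx, double cy,
                                 double ex, double ey, double t) {
    double qx = sx + t * (cx - sx), qy = sy + t * (cy - sy);  // lerp(start, control, t)
    double rx = cx + t * (ex - cx), ry = cy + t * (ey - cy);  // lerp(control, end, t)
    double px = qx + t * (rx - qx), py = qy + t * (ry - qy);  // the point on the curve at t
    return new double[][] {
        { sx, sy, qx, qy, px, py },  // first piece: original start, new control, split point
        { px, py, rx, ry, ex, ey }   // second piece: split point, new control, original end
    };
}

Clipping then reduces to finding the t values where the curve crosses a bound (solving the quadratic in getQuadraticPoint for x or y equal to that bound) and splitting at those values.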

Algo - Ray tracing : spheres like eggs

I am currently working on a project called "Raytracer" in C.
I've run into a problem: the spheres appear oval when they are not centered.
Here is an excerpt of my code:
int i;
int j;
t_ray vect;

i = -1;
vect.x = 100. - cam.x;
while (++i < screenx)
{
    j = -1;
    vect.y = ((screenx / 2.) - i - cam.y) * -1.;
    while (++j < screeny)
    {
        vect.z = (screeny / 2.) - j - cam.z;
    }
}
This is likely not a bug, but simply a reality of how perspective projections work. When the camera is directly looking at a sphere, the projection is circular, but as it moves away from the center, it distorts. For more info read this link in the POV-Ray wiki: http://wiki.povray.org/content/Knowledgebase:Misconceptions#Topic_3
That way the vector has a different length for different pixels. You should normalize the vector at the end (dividing its components by the vector's length).
It's probably late now, but to give you an answer: your "problem" is in reality called "fish-eye". I encountered this problem too. There are many ways to avoid it. The easiest is to increase the distance between the camera and your scene, but it's not the cleanest way.
You can also normalize your rays; here are some reasons:
- it keeps the same distance ratio for every ray
- it keeps the same angle difference between every ray and its neighbors
- it makes many intersection computations much easier
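A minimal sketch of that normalization, mirroring the camera setup in the question's excerpt (the names here are illustrative, not the question's t_ray/cam structures):

// Builds a unit-length primary ray direction for pixel (i, j), assuming the image
// plane sits 100 units from the camera along x, as in the question's excerpt.
static double[] rayDirection(int i, int j, int screenx, int screeny,
                             double camX, double camY, double camZ) {
    double x = 100.0 - camX;
    double y = -((screenx / 2.0) - i - camY);
    double z = (screeny / 2.0) - j - camZ;
    double len = Math.sqrt(x * x + y * y + z * z);
    return new double[] { x / len, y / len, z / len }; // normalized direction
}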

Plotting graphs using Bezier curves

I have an array of points (x0,y0)... (xn,yn) monotonic in x and wish to draw the "best" curve through these using Bezier curves. This curve should not be too "jaggy" (e.g. similar to joining the dots) and not too sinuous (and definitely not "go backwards"). I have created a prototype but wonder whether there is an objectively "best solution".
I need to find control points for all segments (xi, yi) to (x(i+1), y(i+1)). My current approach (except for the endpoints) for a segment x(i), x(i+1) is:
find the vector x(i-1)...x(i+1) , normalize, and scale it by factor * len(i,i+1) to give the vector for the leading control point
find the vector x(i+2)...x(i) , normalize, and scale it by factor * len(i,i+1) to give the vector for the trailing control point.
I have tried factor = 0.1 (too jaggy), 0.33 (too curvy) and 0.20 - about right. But is there a better approach which (say) makes the 2nd and 3rd derivatives as smooth as possible? (I assume such an algorithm is implemented in graphics packages.)
I can post pseudo/code if requested. Here are the three images (0.1/0.2/0.33). The control points are shown by straight lines: black (trailing) and red (leading)
Here's the current code. It's aimed at plotting Y against X (monotonic X) without closing the path. I have built my own library for creating SVG (preferred output); this code creates triples of x,y in coordArray for each curve segment (control1, control2, end). The start is assumed from the last operation (Move or Curve). It's Java but should be easy to interpret (CurvePrimitive maps to cubic, "d" is the String representation of the complete path in SVG).
List<SVGPathPrimitive> primitiveList = new ArrayList<SVGPathPrimitive>();
primitiveList.add(new MovePrimitive(real2Array.get(0)));
for (int i = 0; i < real2Array.size() - 1; i++) {
    // create path 12
    Real2 p0 = (i == 0) ? null : real2Array.get(i - 1);
    Real2 p1 = real2Array.get(i);
    Real2 p2 = real2Array.get(i + 1);
    Real2 p3 = (i == real2Array.size() - 2) ? null : real2Array.get(i + 2);
    Real2Array coordArray = plotSegment(factor, p0, p1, p2, p3);
    SVGPathPrimitive primitive = new CurvePrimitive(coordArray);
    primitiveList.add(primitive);
}
String d = SVGPath.constructDString(primitiveList);
SVGPath path1 = new SVGPath(d);
svg.appendChild(path1);
/**
 * @param factor to scale control points by
 * @param p0 previous point (null at start)
 * @param p1 start of segment
 * @param p2 end of segment
 * @param p3 following point (null at end)
 * @return
 */
private Real2Array plotSegment(double factor, Real2 p0, Real2 p1, Real2 p2, Real2 p3) {
    // create p1-p2 curve
    double len12 = p1.getDistance(p2) * factor;
    Vector2 vStart = (p0 == null) ? new Vector2(p2.subtract(p1)) : new Vector2(p2.subtract(p0));
    vStart = new Vector2(vStart.getUnitVector().multiplyBy(len12));
    Vector2 vEnd = (p3 == null) ? new Vector2(p2.subtract(p1)) : new Vector2(p3.subtract(p1));
    vEnd = new Vector2(vEnd.getUnitVector().multiplyBy(len12));
    Real2Array coordArray = new Real2Array();
    Real2 controlStart = p1.plus(vStart);
    coordArray.add(controlStart);
    Real2 controlEnd = p2.subtract(vEnd);
    coordArray.add(controlEnd);
    coordArray.add(p2);
    // plot controls
    SVGLine line12 = new SVGLine(p1, controlStart);
    line12.setStroke("red");
    svg.appendChild(line12);
    SVGLine line21 = new SVGLine(p2, controlEnd);
    svg.appendChild(line21);
    return coordArray;
}
A Bezier curve requires the data points, along with the slope and curvature at each point. In a graphics program, the slope is set by the slope of the control-line, and the curvature is visualized by the length.
When you don't have such control-lines input by the user, you need to estimate the gradient and curvature at each point. The wikipedia page http://en.wikipedia.org/wiki/Cubic_Hermite_spline, and in particular the 'interpolating a data set' section has a formula that takes these values directly.
Typically, estimating these values from points is done using a finite difference - so you use the values of the points on either side to help estimate. The only choice here is how to deal with the end points where there is only one adjacent point: you can set the curvature to zero, or if the curve is periodic you can 'wrap around' and use the value of the last point.
The wikipedia page I referenced also has other schemes, but most others introduce some other 'free parameter' that you will need to find a way of setting, so in the absence of more information to help you decide how to set other parameters, I'd go for the simple scheme and see if you like the results.
Let me know if the wikipedia article is not clear enough, and I'll knock up some code.
One other point to be aware of: what 'sort' of Bezier interpolation are you after? Most graphics programs do cubic bezier in 2 dimensions (ie you can draw a circle-like curve), but your sample images look like it could be 1d functions approximation (as in for every x there is only one y value). The graphics program type curve is not really mentioned on the page I referenced. The maths involved for converting estimate of slope and curvature into a control vector of the form illustrated on http://en.wikipedia.org/wiki/B%C3%A9zier_curve (Cubic Bezier) would take some working out, but the idea is similar.
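A sketch of the simple scheme described above, assuming uniform spacing and Catmull-Rom style finite-difference tangents (one-sided at the ends): a Hermite tangent m at a point maps to a cubic Bezier control point one third of m away from that point. The names and array layout here are illustrative, not from the question's code.

// For points p[0..n-1] (each a double[]{x, y}), returns the two cubic Bezier
// control points for the segment from p[i] to p[i+1].
static double[][] segmentControls(double[][] p, int i) {
    double[] m1 = tangent(p, i);
    double[] m2 = tangent(p, i + 1);
    // Hermite-to-Bezier: control points sit one third of the tangent from each endpoint.
    double[] c1 = { p[i][0] + m1[0] / 3.0, p[i][1] + m1[1] / 3.0 };
    double[] c2 = { p[i + 1][0] - m2[0] / 3.0, p[i + 1][1] - m2[1] / 3.0 };
    return new double[][] { c1, c2 };
}

// Central difference in the interior, one-sided difference at the two ends.
static double[] tangent(double[][] p, int i) {
    int a = Math.max(i - 1, 0), b = Math.min(i + 1, p.length - 1);
    double scale = (b - a == 2) ? 0.5 : 1.0;
    return new double[] { (p[b][0] - p[a][0]) * scale, (p[b][1] - p[a][1]) * scale };
}

With uniform spacing this is the standard Catmull-Rom spline expressed in Bezier form.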
Below is a picture and algorithm for a possible scheme, assuming your only input is the three points P1, P2, P3
Construct a line (C1,P1,C2) such that the angles (P3,P1,C1) and (P2,P1,C2) are equal. In a similar fashion construct the other dark-grey lines. The intersections of these dark-grey lines (marked C1, C2 and C3) become the control-points as in the same sense as the images on the Bezier Curve wikipedia site. So each red curve, such as (P3,P1), is a quadratic bezier curve defined by the points (P3, C1, P1). The construction of the red curve is the same as given on the wikipedia site.
However, I notice that the control-vector on the Bezier Curve wikipedia page doesn't seem to match the sort of control vector you are using, so you might have to figure out how to equate the two approaches.
I tried this with quadratic splines instead of cubic ones which simplifies the selection of control points (you just choose the gradient at each point to be a weighted average of the mean gradients of the neighbouring intervals, and then draw tangents to the curve at the data points and stick the control points where those tangents intersect), but I couldn't find a sensible policy for setting the gradients of the end points. So I opted for Lagrange fitting instead:
function lagrange(points) { // points is [ [x1,y1], [x2,y2], ... ]
    // See: http://www.codecogs.com/library/maths/approximation/interpolation/lagrange.php
    var j, n = points.length;
    var p = [];
    for (j = 0; j < n; j++) {
        p[j] = function (x, j) { // have to pass j cos JS is lame at currying
            var k, res = 1;
            for (k = 0; k < n; k++)
                res *= (k == j ? points[j][1] : ((x - points[k][0]) / (points[j][0] - points[k][0])));
            return res;
        }
    }
    return function (x) {
        var i, res = 0;
        for (i = 0; i < n; i++)
            res += p[i](x, i);
        return res;
    }
}
With that, I just make lots of samples and join them with straight lines.
This is still wrong if your data (like mine) consists of real world measurements. These are subject to random errors and if you use a technique that forces the curve to hit them all precisely, then you can get silly valleys and hills between the points. In cases like these, you should ask yourself what order of polynomial the data should fit and ... well ... that's what I'm about to go figure out.

Ray Trace Refraction looks "fake"

So in my ray tracer that I'm building I've gotten refraction to work for spheres, along with caustic effects; however, the glass spheres don't look particularly good. I believe the refraction math is correct because the light appears to bend and invert the way you'd expect it to, but it doesn't look like glass, it just looks like paper or something.
I've read that total internal reflection is responsible for much of what makes glass look like it does, however when I tested to see if any of my refracted rays go above the critical angle, none of them do, so my glass sphere has no total internal reflection. I'm not sure if this is normal or if I've done something wrong. I've posted my refraction code below so if anyone has any suggestions I'd love to hear them.
/*
 * Parameter 'dir' lets you know whether the ray is starting
 * from outside the sphere going in (0) or from in the sphere
 * going back out (1).
 */
void Ray::Refract(Intersection *hit, int dir)
{
    float n1, n2;
    if (dir == 0) { n1 = 1.0; n2 = hit->mat->GetRefract(); }
    if (dir == 1) { n1 = hit->mat->GetRefract(); n2 = 1.0; }

    STVector3 N = hit->normal / hit->normal.Length();
    if (dir == 1) N = -N;
    STVector3 V = D / D.Length();

    double c1 = -STVector3::Dot(N, V);
    double n = n1 / n2;
    double c2 = sqrt(1.0f - (n * n) * (1.0f - (c1 * c1)));

    STVector3 Rr = (n * V) + (n * c1 - c2) * N;

    /* These are the parameters of the current ray being updated */
    E = hit->point;  // Starting point
    D = Rr;          // Direction
}
This method is called during my main ray-tracing method RayTrace() which runs recursively. Here's a small section of it below that is in charge of refraction:
if (hit->mat->IsRefractive())
{
    temp.Refract(hit, dir);   // temp is my ray that is being refracted
    dir++;
    result += RayTrace(temp, dir);   // result is the returned RGB value
}
You're right, you'll never get total internal reflection in a sphere (seen from the outside). That's because of symmetry: a ray inside a sphere will hit the surface at the same angle at both ends, meaning that, if it exceeds the critical angle at one end, it would have to exceed it at the other one as well (and so would not have been able to enter the sphere from the outside in the first place).
However, you'll still get a lot of partial reflection according to Fresnel's law. It doesn't look like your code accounts for that, which is probably why your glass looks fake. Try including that and see if it helps.
(Yes, it means that your rays will split in two whenever they hit a refractive surface. That happens in reality, so you'll just have to live with it. You can either trace both paths, or, if you're using a randomized ray tracing algorithm anyway, just sample one of them randomly with the appropriate weights.)
Ps. If you don't want to deal with stuff like light polarization, you may want to just use Schlick's approximation to the Fresnel equations.
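A sketch of Schlick's approximation (an illustrative helper, not part of the question's code). It returns the fraction of light reflected at the interface; the remainder is refracted, so the two traced contributions can be weighted by R and 1 - R.

// Schlick's approximation of the Fresnel reflectance.
// n1 is the index of refraction on the incident side, n2 on the far side,
// cosI is the cosine of the angle between the incident ray and the normal (>= 0).
static double schlickReflectance(double n1, double n2, double cosI) {
    double r0 = (n1 - n2) / (n1 + n2);
    r0 *= r0;
    double cos = cosI;
    if (n1 > n2) {
        // Going from the denser medium: use the cosine of the transmitted angle,
        // and return total reflection beyond the critical angle.
        double n = n1 / n2;
        double sinT2 = n * n * (1.0 - cosI * cosI);
        if (sinT2 > 1.0) return 1.0;          // total internal reflection
        cos = Math.sqrt(1.0 - sinT2);
    }
    double x = 1.0 - cos;
    return r0 + (1.0 - r0) * x * x * x * x * x;  // r0 + (1 - r0) * (1 - cos)^5
}

// Usage sketch: color = R * trace(reflectedRay) + (1 - R) * trace(refractedRay)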

Why won't my raytracer recreate the "mount" scene?

I'm trying to render the "mount" scene from Eric Haines' Standard Procedural Database (SPD), but the refraction part just doesn't want to co-operate. I've tried everything I can think of to fix it.
This one is my render (with Watt's formula):
(source: philosoraptor.co.za)
This is my render using the "normal" formula:
(source: philosoraptor.co.za)
And this one is the correct render:
(source: philosoraptor.co.za)
As you can see, there are only a couple of errors, mostly around the poles of the spheres. This makes me think that refraction, or some precision error is to blame.
Please note that there are actually 4 spheres in the scene, their NFF definitions (s x_coord y_coord z_coord radius) are:
s -0.8 0.8 1.20821 0.17
s -0.661196 0.661196 0.930598 0.17
s -0.749194 0.98961 0.930598 0.17
s -0.98961 0.749194 0.930598 0.17
That is, there is a fourth sphere behind the more obvious three in the foreground. It can be seen in the gap left between these three spheres.
Here is a picture of that fourth sphere alone:
(source: philosoraptor.co.za)
And here is a picture of the first sphere alone:
(source: philosoraptor.co.za)
You'll notice that many of the oddities present in both my version and the correct version are missing. We can conclude that these effects are the result of interactions between the spheres; the question is, which interactions?
What am I doing wrong? Below are some of the potential errors I've already considered:
Refraction vector formula.
As far as I can tell, this is correct. It's the same formula used by several websites and I verified the derivation personally. Here's how I calculate it:
double sinI2 = eta * eta * (1.0f - cosI * cosI);
Vector transmit = (v * eta) + (n * (eta * cosI - sqrt(1.0f - sinI2)));
transmit = transmit.normalise();
I found an alternate formula in 3D Computer Graphics, 3rd Ed by Alan Watt. It gives a closer approximation to the correct image:
double etaSq = eta * eta;
double sinI2 = etaSq * (1.0f - cosI * cosI);
Vector transmit = (v * eta) + (n * (eta * cosI - (sqrt(1.0f - sinI2) / etaSq)));
transmit = transmit.normalise();
The only difference is that I'm dividing by eta^2 at the end.
Total internal reflection.
I tested for this, using the following conditional before the rest of my intersection code:
if (sinI2 <= 1)
Calculation of eta.
I use a stack-like approach for this problem:
/* Entering object. */
if (r.normal.dot(r.dir) < 0)
{
    double eta1 = r.iorStack.back();
    double eta2 = m.ior;
    eta = eta1 / eta2;
    r.iorStack.push_back(eta2);
}
/* Exiting object. */
else
{
    double eta1 = r.iorStack.back();
    r.iorStack.pop_back();
    double eta2 = r.iorStack.back();
    eta = eta1 / eta2;
}
As you can see, this stores the previous objects that contained this ray in a stack. When exiting the code pops the current IOR off the stack and uses that, along with the IOR under it to compute eta. As far as I know this is the most correct way to do it.
This works for nested transmitting objects. However, it breaks down for intersecting transmitting objects. The problem here is that you need to define the IOR for the intersection independently, which the NFF file format does not do. It's unclear then, what the "correct" course of action is.
Moving the new ray's origin.
The new ray's origin has to be moved slightly along the transmitted path so that it doesn't intersect at the same point as the previous one.
p = r.intersection + transmit * 0.0001f;
p += transmit * 0.01f;
I've tried making this value smaller (0.001f) and (0.0001f) but that makes the spheres appear solid. I guess these values don't move the rays far enough away from the previous intersection point.
EDIT: The problem here was that the reflection code was doing the same thing. So when an object is reflective as well as refractive then the origin of the ray ends up in completely the wrong place.
Amount of ray bounces.
I've artificially limited the amount of ray bounces to 4. I tested raising this limit to 10, but that didn't fix the problem.
Normals.
I'm pretty sure I'm calculating the normals of the spheres correctly. I take the intersection point, subtract the centre of the sphere and divide by the radius.
Just a guess based on doing an image diff (and without reading the rest of your question): the problem looks to me to be the refraction on the back side of the sphere. You might be:
doing it backwards: e.g. reversing (or not reversing) the indexes of refraction.
missing it entirely?
One way to check for this would be to look at the mount through a cube that is almost facing the camera. If the refraction is correct, the picture should be offset slightly but otherwise un-altered. If it's not right, then the picture will seem slightly tilted.
So after more than a year, I finally figured out what was going on here. Clear minds and all that. I was completely off track with the formula. I'm now using a formula by Heckbert instead, which I am sure is correct because I proved it myself using geometry and discrete math.
Here's the correct vector calculation:
double c1 = v.dot(n) * -1;
double c1Sq = pow(c1, 2);
/* Heckbert's formula requires eta to be eta2 / eta1, so I have to flip it here. */
eta = 1 / eta;
double etaSq = pow(eta, 2);

if (etaSq + c1Sq >= 1)
{
    Vector transmit = (v / eta) + (n / eta) * (c1 - sqrt(etaSq - 1 + c1Sq));
    transmit = transmit.normalise();
    ...
}
else
{
    /* Total internal reflection. */
}
In the code above, eta is eta1 (the IOR of the surface from which the ray is coming) over eta2 (the IOR of the destination surface), v is the incident ray and n is the normal.
There was another problem, which confused matters some more: I had to flip the normal when exiting an object (which is obvious; I missed it because the other errors were obscuring it).
Lastly, my line of sight algorithm (to determine whether a surface is illuminated by a point light source) was not properly passing through transparent surfaces.
So now my images line up properly :)
