I have recently tried to change the current way I calculate diffuse lighting in my RayTracer. It used to be calculated like this:
float lambert = (light_ray_dir * normal) * coef;
red += lambert * current.color.red * mat.kd.red;
green += lambert * current.color.green * mat.kd.green;
blue += lambert * current.color.blue * mat.kd.blue;
where coef is an attenuation coefficient that starts at 1 for each pixel and is then attenuated for each reflected ray that is generated by this line:
coef *= mat.reflection;
This worked well.
But I decided to try something more realistic and implemented this:
float squared_attenuation = LIGHT_FALLOFF * lenght;
light_intensity.setX ((current.color.red /*INTENSITY*/)/ squared_attenuation);
light_intensity.setY ((current.color.green /*INTENSITY*/)/ squared_attenuation);
light_intensity.setZ ((current.color.blue /*INTENSITY*/)/ squared_attenuation);
red += ALBEDO * light_intensity.getX() * lambert * mat.kd.red;
green += ALBEDO * light_intensity.getY() * lambert * mat.kd.green;
blue += ALBEDO * light_intensity.getZ() * lambert * mat.kd.blue;
where LIGHT_FALLOFF is a constant:
#define LIGHT_FALLOFF M_PI * 4
and length is the length of the vector that goes from the point light's center to the intersection point:
inline float normalize_return_lenght () {
    float lenght = sqrtf (x*x + y*y + z*z);
    float inv_length = 1/lenght;
    x = x * inv_length, y = y * inv_length, z = z * inv_length;
    return lenght;
}
float lenght = light_ray_dir.normalize_return_lenght ();
The problem is that all this generates nothing more than a black screen! The main culprit is the length that goes in as the divisor in the light_intensity.set calls: it makes the final color values tiny (on the order of 10^-5). However, even if I replace it with one (ruining my goal of realistic light attenuation), I still get color values too close to zero, and hence a black image.
I tried adding other light sources closer to the objects; however, since the models to be shaded are made of multiple polygons with different coordinates, it is hard to determine a good location for them.
So I ask: does it seem normal to you that this is happening, or does it seem like a bug? In theory it does not seem strange to me, since the attenuation is quadratic.
If it is not a bug, is there any hint as to where to place the light sources, or anything else I can do, to get an image that is not all black?
Thanks for reading all this!
P.S.: INTENSITY is commented out because in the example I based my code on, it was 1.
So you are assuming that the light has a luminosity of "1" ?
How far away is your light?
If your light is - say - 10 units away, then the contribution from the light will be 1/10, or a very small number. This is probably why your image is dark.
You need to have quite large numbers for your light intensity if you are going to do this. In one of my scenes, I have a light that is about 1000 units away (pretending to be the Sun) and the intensity is 380000!!
Another thing ... to simulate reality, you should be using 1 / length^2. The light intensity falls off with the square of distance, not just with distance.
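To illustrate, here's a minimal sketch of both points using the names from your code. LIGHT_INTENSITY is a constant I've made up (it isn't in your code) that you would tune to your scene's scale:
const float LIGHT_INTENSITY = 1000.0f; // make this large for distant lights

float dist = light_ray_dir.normalize_return_lenght();
float squared_attenuation = LIGHT_FALLOFF * dist * dist; // 4*pi*d^2, not 4*pi*d

light_intensity.setX((LIGHT_INTENSITY * current.color.red)   / squared_attenuation);
light_intensity.setY((LIGHT_INTENSITY * current.color.green) / squared_attenuation);
light_intensity.setZ((LIGHT_INTENSITY * current.color.blue)  / squared_attenuation);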
Good luck!
Related
I'm trying to draw simple scaled points in my custom graphics engine. The points are scaled in pixel space, and the radius of the points are in pixels, but the position of the points fed to the draw function are in world coordinates.
So far, everything is working great, except for a depth clipping issue. The points are of constant size, regardless of how far away they are, which is done by offsetting the vertices in projected/clip space. However, when they are close to surfaces, they partially intersect them in the depth buffer.
Since these points represent world coordinates, I want them to use the depth buffer, and be hidden behind objects that are in front of them. However, when the point is close to a surface, I want to push it toward the camera, so it doesn't partially intersect it. I think it is easier to just always do this push, regardless of the point being close to a surface. What makes the most sense to me is to just push it by its radius, so that all of its vertices are exactly far enough away to avoid clipping into nearby surfaces.
The easiest way I've found to do this is to simply subtract from the Z value in the vertex shader, after transforming into view-projection space. However, I'm having some trouble converting my pixel radius into a depth offset. Regardless of the math I use, what works close up never seems to work far away. I'm thinking maybe this is due to how the z buffer is non-linear, but could be wrong.
Currently, the closest I've been to solving this is the following:
proj_vertex_pos.z -= point_pixel_radius / proj_vertex_pos.w * 100.0
I'm honestly not sure why 100.0 helps make this work yet. I added it simply because dividing the radius by w was too small of a value. Can anyone point me in the right direction? How do I convert my pixel distance into a depth distance? Especially if the depth distance changes scale depending on which depth you are at? Or am I just way off?
The solution was to convert my pixel space radius into world space units, since the z-buffer is still in world space, even after transforming by the view-projection transform. This can be done by converting pixels into a factor (factor = pixels / screen_size), then convert the factor into world space units, which was a little more involved - I had to calculate the world-space size of the screen at a given distance, then multiply the factor by that to get world units. I can post the related code if anyone needs it. There's probably a simpler way to calculate it, but my brain always goes straight for factors.
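If it helps, here is the rough shape of that pixels -> factor -> world-units conversion as a standalone snippet. The names here are made up for illustration; tanHalfFov plays the role that my FieldFactor value plays in the shader below:
// radiusPx: point radius in pixels; screenWidthPx: screen width in pixels
// tanHalfFov: tan(horizontal_fov / 2); camZDist: camera-space depth of the vertex
float pixelRadiusToWorld(float radiusPx, float screenWidthPx,
                         float tanHalfFov, float camZDist)
{
    // fraction of the screen width covered by the radius
    float factor = radiusPx / screenWidthPx;
    // world-space width of the view at this depth (tan(fov/2) * dist is half the width)
    float worldScreenWidth = 2.0f * tanHalfFov * camZDist;
    // scale the fraction by that width to get the radius in world units
    return factor * worldScreenWidth;
}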
The reason I was getting different results at different distances was mainly because I was only offsetting the z component of the clip position by the result. It's also necessary to offset the w component to make the depth offset work at any distance (linearly). However, in order to offset the w component, you first have to divide xy by w, modify w as needed, then multiply xy by the new w (as the second shader below does). That made the math pretty involved, so I changed strategy and offset the vertex before clip space instead, which requires calculating the distance to the camera along the camera's Z axis manually, but it honestly ended up being about the same amount of math either way.
Here is the final vertex shader at the moment. Hopefully the global values make sense. I did not modify this to post it, so please forgive any silliness in my comments. EDIT: I had to make some edits to this, because I was accidentally moving the vertex along the camera-Z direction instead of directly toward the camera:
lerpPoint main(vinBake vin)
{
    // prepare output
    lerpPoint pin;
    // extract radius/size from input
    pin.InRadius = vin.TexCoord.y;
    // compute offset from vertex to camera
    float3 to_cam_offset = Scene.CamPos - vin.Position.xyz;
    // compute the Z distance of the camera from the vertex
    float cam_z_dist = -dot( Scene.CamZ, to_cam_offset );
    // compute the radius factor
    // + this describes what percentage of the screen is covered by our radius
    // + this removes it from pixel space into factor-space
    float radius_fac = Scene.InvScreenRes.x * pin.InRadius;
    // compute world-space radius by scaling with FieldFactor
    // + FieldFactor.x represents the world-space width of the camera view at whatever distance we scale it by
    // + here, we scale FieldFactor.x by the camera z distance, which gives us the world radius, in world units
    // + we must multiply by 2 because FieldFactor.x only represents HALF of the screen
    float radius_world = radius_fac * Scene.FieldFactor.x * cam_z_dist * 2.0;
    // finally, push the vertex toward the camera by the world radius
    // + note: moving by radius will only work with surfaces facing the camera, since we are moving toward the camera, rather than away from the surface
    // + because of this, we also multiply by another 4, to compensate for nearby surface angles, but there is no scale that would work for every angle
    float3 offset = normalize(to_cam_offset) * (radius_world * -4.0);
    // generate projected position
    // + after this, x=-1 is left, x=+1 is right, y=-1 is bottom, and y=+1 is top of screen
    // + note that after this transform, w represents "distance from camera", and z represents "distance from near plane", both in world space
    pin.ClipPos = mul( Scene.ViewProj, float4( vin.Position.xyz + offset, 1.0) );
    // calculate radius of point, in clip space, from our radius factor
    // + we scale by 2 to convert pixel radius into clip-radius
    float clip_radius = radius_fac * 2.0 * pin.ClipPos.w;
    // compute scaled clip-space offset and apply it to our clip-position
    // + vin.Prop.xy: -1,-1 = bottom-left, -1,1 = top left, 1,-1 = bottom right, 1,1 = top right (note: in clip-space, +1 = top, -1 = bottom)
    // + we scale by clipping depth (part of clip_radius) to retain constant scale, but this will give us a VERY LARGE result
    // + we scale by inverse resolution (clip_radius) to convert our input screen scale (eg, 1->1024) into a clip scale (eg, 0.001 to 1.0)
    pin.ClipPos.x += vin.Prop.x * clip_radius;
    pin.ClipPos.y += vin.Prop.y * clip_radius * Scene.Aspect;
    // return result
    return pin;
}
Here is the other version, which offsets z and w instead of changing things in world space. After the edits above, this is probably the better solution:
lerpPoint main(vinBake vin)
{
    // prepare output
    lerpPoint pin;
    // extract radius/size from input
    pin.InRadius = vin.TexCoord.y;
    // generate projected position
    // + after this, x=-1 is left, x=+1 is right, y=-1 is bottom, and y=+1 is top of screen
    // + note that after this transform, w represents "distance from camera", and z represents "distance from near plane", both in world space
    pin.ClipPos = mul( Scene.ViewProj, float4( vin.Position.xyz, 1.0) );
    // compute the radius factor
    // + this describes what percentage of the screen is covered by our radius
    // + this removes it from pixel space into factor-space
    float radius_fac = Scene.InvScreenRes.x * pin.InRadius;
    // compute world-space radius by scaling with FieldFactor
    // + FieldFactor.x represents the world-space width of the camera view at whatever distance we scale it by
    // + here, we scale FieldFactor.x by the camera z distance, which gives us the world radius, in world units
    // + we must multiply by 2 because FieldFactor.x only represents HALF of the screen
    float radius_world = radius_fac * Scene.FieldFactor.x * pin.ClipPos.w * 2.0;
    // offset depth by our world radius
    // + we scale this extra to compensate for surfaces with high angles relative to the camera (since we are moving directly at it)
    // + notice we have to make the perspective divide before modifying w, then re-apply it after, or xy will be off
    pin.ClipPos.xy /= pin.ClipPos.w;
    pin.ClipPos.z -= radius_world * 10.0;
    pin.ClipPos.w -= radius_world * 10.0;
    pin.ClipPos.xy *= pin.ClipPos.w;
    // calculate radius of point, in clip space, from our radius factor
    // + we scale by 2 to convert pixel radius into clip-radius
    float clip_radius = radius_fac * 2.0 * pin.ClipPos.w;
    // compute scaled clip-space offset and apply it to our clip-position
    // + vin.Prop.xy: -1,-1 = bottom-left, -1,1 = top left, 1,-1 = bottom right, 1,1 = top right (note: in clip-space, +1 = top, -1 = bottom)
    // + we scale by clipping depth (part of clip_radius) to retain constant scale, but this will give us a VERY LARGE result
    // + we scale by inverse resolution (clip_radius) to convert our input screen scale (eg, 1->1024) into a clip scale (eg, 0.001 to 1.0)
    pin.ClipPos.x += vin.Prop.x * clip_radius;
    pin.ClipPos.y += vin.Prop.y * clip_radius * Scene.Aspect;
    // return result
    return pin;
}
I am doing a project about ray tracing, and right now I can do some basic rendering.
The image below has:
mirror reflection,
refraction,
texture mapping
and shadow.
I am trying to do glossy reflection; so far, this is what I am getting.
Could anyone tell me if there is any problem in this glossy reflection image?
In comparison, the image below is from the mirror reflection
This is my code for glossy reflection. Basically, once a primary ray intersects an object, it randomly shoots another 80 rays from that intersection point and takes the average of those 80 rays' colours. The problem I am having with this code is the magnitude of x and y: I have to divide them by some value, in this case 16, so that the glossy reflection rays aren't too random. Is there anything wrong with this logic?
Colour c(0, 0, 0);
for (int i = 0; i < 80; i++) {
    Ray3D testRay;
    double a = rand() / (double) RAND_MAX;
    double b = rand() / (double) RAND_MAX;
    double theta = acos(pow((1 - a), ray.intersection.mat->reflectivity));
    double phi = 2 * M_PI * b;
    double x = sin(phi) * cos(theta) / 16;
    double y = sin(phi) * sin(theta) / 16;
    double z = cos(phi);
    Vector3D u = reflect.dir.cross(ray.intersection.normal);
    Vector3D v = reflect.dir.cross(u);
    testRay.dir = x * u + y * v + reflect.dir;
    testRay.dir.normalize();
    testRay.origin = reflect.origin;
    testRay.nbounces = reflect.nbounces;
    c = c + (ray.intersection.mat->reflectivity) * shadeRay(testRay);
}
col = col + c / 80;
Apart from the hard-coded constants, which are never great in code, there is a more subtle issue, although your images overall look good.
Monte Carlo integration consists of summing the integrand divided by the probability density function (pdf) that generated the samples. There are thus two problems in your code:
you haven't divided by the pdf, although you seem to have used a pdf for Phong models (if I recognize it correctly; at least it is not a uniform pdf);
you have further scaled your x and y components by 1/16 for apparently no reason, which changes your pdf again.
The idea is that if you are able to sample your rays exactly according to Phong's model times the cosine law, then you don't even have to multiply your integrand by the BRDF. In practice, there is no exact formula that allows you to sample a BRDF exactly (apart from Lambertian ones), so you need to compute:
pixel value = sum BRDF*cosine*incoming_light / pdf
which mostly cancels out if BRDF*cosine = pdf.
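To make that concrete, here is a rough sketch of how your loop could sample a Phong lobe around the mirror direction and divide by its pdf explicitly, reusing your types. The exponent field shininess is an assumption (your material may store the Phong exponent under another name), and the surface cosine term is still omitted, as in your original code:
Colour c(0, 0, 0);
const int N = 80;
double n = ray.intersection.mat->shininess;   // assumed Phong exponent field
for (int i = 0; i < N; i++) {
    double a = rand() / (double) RAND_MAX;
    double b = rand() / (double) RAND_MAX;
    // Sample a direction around the mirror direction with
    // pdf(alpha) = (n + 1) / (2*pi) * pow(cos(alpha), n)
    double cosAlpha = pow(a, 1.0 / (n + 1.0));
    double sinAlpha = sqrt(1.0 - cosAlpha * cosAlpha);
    double phi = 2.0 * M_PI * b;
    // Orthonormal basis around the reflection direction (degenerate when the
    // reflection direction is parallel to the normal; handle that case separately)
    Vector3D w = reflect.dir;
    Vector3D u = w.cross(ray.intersection.normal);
    u.normalize();
    Vector3D v = w.cross(u);
    Ray3D testRay;
    testRay.origin = reflect.origin;
    testRay.nbounces = reflect.nbounces;
    testRay.dir = (sinAlpha * cos(phi)) * u + (sinAlpha * sin(phi)) * v + cosAlpha * w;
    testRay.dir.normalize();
    // The Phong BRDF (n + 2)/(2*pi) * cos^n(alpha) divided by the pdf above
    // leaves only the constant (n + 2)/(n + 1).
    double weight = (n + 2.0) / (n + 1.0) * ray.intersection.mat->reflectivity;
    c = c + weight * shadeRay(testRay);
}
col = col + c / N;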
Of course, your images overall look good, so if you're not interested in physical plausibility, they may well be good enough as they are.
A good source about the various pdfs used in computer graphics and Monte-Carlo integration (with the appropriate formulas) is the Global Illumination Compendium by Philip Dutré.
I have a Bing map and two points, Point1 and Point2, and I want to calculate the distance between these two points. Is that possible?
And if I want to put a circle two thirds of the way along the path between Point1 and Point2, near Point2, how can I do that?
Microsoft has a GeoCoordinate.GetDistanceTo method, which uses the Haversine formula.
For me, other implementations returned NaN for distances that are too small. I haven't run into any issues with the built-in method yet.
See the Haversine formula, or even better the Vincenty formula, for how to solve this problem.
The following code uses the haversine formula to get the distance:
public double GetDistanceBetweenPoints(double lat1, double long1, double lat2, double long2)
{
    double dLat = (lat2 - lat1) / 180 * Math.PI;
    double dLong = (long2 - long1) / 180 * Math.PI;

    double a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2)
             + Math.Cos(lat1 / 180 * Math.PI) * Math.Cos(lat2 / 180 * Math.PI)
             * Math.Sin(dLong / 2) * Math.Sin(dLong / 2);
    double c = 2 * Math.Atan2(Math.Sqrt(a), Math.Sqrt(1 - a));

    // Calculate the radius of the earth at the given latitude.
    // For this you can use either of the two points.
    double radiusE = 6378135; // Equatorial radius, in metres
    double radiusP = 6356750; // Polar radius, in metres

    // Numerator part of the radius function
    double nr = Math.Pow(radiusE * radiusP * Math.Cos(lat1 / 180 * Math.PI), 2);
    // Denominator part of the radius function
    double dr = Math.Pow(radiusE * Math.Cos(lat1 / 180 * Math.PI), 2)
              + Math.Pow(radiusP * Math.Sin(lat1 / 180 * Math.PI), 2);
    double radius = Math.Sqrt(nr / dr);

    // Calculate the distance in metres.
    double distance = radius * c;
    return distance; // distance in metres
}
You can find a good site with more information here.
You can use a geographic library for (re-)projection and calculation operations if you need more accurate results or want to do some math operations (e.g. transform a circle onto a spheroid/projection). Take a look at DotSpatial or SharpMap and the samples/unit tests/sources there; this might help solve your problem.
Anyway, if you know the geodesic distance and bearing, you can also calculate where the resulting target position (the center of your circle) is; see, for example, the "Direct Problem" of Vincenty's algorithms. Here are also some useful algorithm implementations for Silverlight/.NET.
You might also consider posting your question at GIS Stack Exchange; they discuss GIS-related problems like yours. Take a look at the question about calculating a lat/long point x miles from a point (as you already know the whole distance now), or see the discussion here about distance calculations. That question is related to the problem of drawing a point on a line at a given distance and is nearly the same as yours (since you need a center and a radius).
Another option is to use the ArcGIS API for Silverlight, which can also display Bing Maps. It is open source and you can learn the things you need there (or just use them, since they already exist in the SDK). See the Utilities examples tab within the samples.
As I already mentioned, take a look at this page for more information regarding your problem. There you'll find a formula, JavaScript code and an Excel sample for calculating a destination point from a given distance and bearing from a start point (see the headings there).
It shouldn't be difficult to "transform" that code into your C# world.
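If you'd rather compute it directly, here is a rough sketch of that destination-point formula written as plain C++-style math (it ports to C# almost line for line). It assumes a spherical Earth, and all of the names are mine rather than any library's API:
#include <cmath>

const double EARTH_RADIUS_M = 6371000.0;
const double DEG2RAD = M_PI / 180.0;
const double RAD2DEG = 180.0 / M_PI;

// Initial bearing from (lat1, lon1) to (lat2, lon2), in radians clockwise from north.
double initialBearing(double lat1, double lon1, double lat2, double lon2)
{
    double phi1 = lat1 * DEG2RAD, phi2 = lat2 * DEG2RAD;
    double dLambda = (lon2 - lon1) * DEG2RAD;
    double y = sin(dLambda) * cos(phi2);
    double x = cos(phi1) * sin(phi2) - sin(phi1) * cos(phi2) * cos(dLambda);
    return atan2(y, x);
}

// Point reached after travelling 'distance' metres from (lat, lon) along 'bearing'.
void destinationPoint(double lat, double lon, double bearing, double distance,
                      double& outLat, double& outLon)
{
    double phi1 = lat * DEG2RAD, lambda1 = lon * DEG2RAD;
    double delta = distance / EARTH_RADIUS_M;   // angular distance
    double phi2 = asin(sin(phi1) * cos(delta) + cos(phi1) * sin(delta) * cos(bearing));
    double lambda2 = lambda1 + atan2(sin(bearing) * sin(delta) * cos(phi1),
                                     cos(delta) - sin(phi1) * sin(phi2));
    outLat = phi2 * RAD2DEG;
    outLon = lambda2 * RAD2DEG;
}
For your circle center, call destinationPoint with the bearing from Point1 to Point2 and two thirds of the total distance between them.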
I'm making a SHMUP game that has a space ship. That space ship currently fires a main cannon from its center point. The sprite that represents the ship has a center-based registration point: (0,0) is the center of the ship.
When I fire the main cannon, I make a bullet, set its x and y coordinates to match the avatar's, and add it to the display list. This works fine.
I then made two new functions, fireLeftCannon and fireRightCannon. These create a bullet and add it to the display list, but with x and y values offset from the ship's center by roughly ±10 and +15. This creates a sort of triangle of bullet entry points.
Similar to this:
▲
▲ ▲
The game tick function adjusts the avatar's rotation to always point at the cursor; this is my aiming method. When I shoot straight up, all three bullets fire upward in the expected pattern. However, when I rotate and face to the right, the entry points do not rotate with the ship. This is not an issue for the center-point main cannon.
My question is: how do I take the current center position (this.x, this.y) and adjust it based on my current rotation to place a new bullet so that it is positioned correctly?
Thanks a lot in advance.
Tyler
EDIT
OK, I tried your solution and it didn't work. Here is my bullet move code:
var pi:Number = Math.PI
var _xSpeed:Number = Math.cos((_rotation - 90) * (pi/180) );
var _ySpeed:Number = Math.sin((_rotation - 90) * (pi / 180) );
this.x += (_xSpeed * _bulletSpeed );
this.y += (_ySpeed * _bulletSpeed );
And I tried adding your code to the left shoulder cannon:
_bullet.x = this.x + Math.cos( StaticMath.ToRad(this.rotation) ) * ( this.x - 10 ) - Math.sin( StaticMath.ToRad(this.rotation)) * ( this.x - 10 );
_bullet.y = this.y + Math.sin( StaticMath.ToRad(this.rotation)) * ( this.y + 15 ) + Math.cos( StaticMath.ToRad(this.rotation)) * ( this.y + 15 );
This is placing the shots a good deal away from the ship and sometimes off screen.
How am I messing up the translation code?
What you need to start with is, to be precise, the coordinates of your cannons in the ship's coordinate system (or “frame of reference”). This is like what you have now but starting from 0, not the ship's position, so they would be something like:
(0, 0) -- center
(10, 15) -- left shoulder
(-10, 15) -- right shoulder
Then what you need to do is transform those coordinates into the coordinate system of the world/scene; this is the same kind of thing your graphics library is doing to draw the sprite.
In your particular case, the intervening transformations are
world ←translation→ ship position ←rotation→ ship positioned and rotated
So given that you have coordinates in the third frame (how the ship's sprite is drawn), you need to apply the rotation, and then apply the translation, at which point you're in the first frame. There are two approaches to this: one is matrix arithmetic, and the other is performing the transformations individually.
For this case, it is simpler to skip the matrices, unless you already have a matrix library handy, in which case you should use it: calculate the ship's coordinate transformation matrix once per frame and then use it for all bullets etc.
I'll now explain doing it directly.
The general method of applying a rotation to coordinates (in two dimensions) is this (where (x1,y1) is the original point and (x2,y2) is the new point):
x2 = cos(angle)*x1 - sin(angle)*y1
y2 = sin(angle)*x1 + cos(angle)*y1
Whether this is a clockwise or counterclockwise rotation will depend on the “handedness” of your coordinate system; just try it both ways (+angle and -angle) until you have the right result. Don't forget to use the appropriate units (radians or degrees, but most likely radians) for your angles given the trig functions you have.
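For example, with an angle of 90° (π/2 radians), the left-shoulder offset (10, 15) maps to (cos(90°)*10 - sin(90°)*15, sin(90°)*10 + cos(90°)*15) = (-15, 10), so the gun offset really does swing around the ship's center as the ship turns.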
Now, you need to apply the translation. I'll continue using the same names, so (x3,y3) is the rotated-and-translated point. (dx,dy) is what we're translating by.
x3 = dx + x2
y3 = dy + y2
As you can see, that's very simple; you could easily combine it with the rotation formulas.
I have described transformations in general. In the particular case of the ship bullets, it works out to this in particular:
bulletX = shipPosX + cos(shipAngle)*gunX - sin(shipAngle)*gunY
bulletY = shipPosY + sin(shipAngle)*gunX + cos(shipAngle)*gunY
If your bullets are turning the wrong direction, negate the angle.
If you want to establish a direction-dependent initial velocity for your bullets (e.g. always-firing-forward guns) then you just apply the rotation but not the translation to the velocity (gunVelX, gunVelY).
bulletVelX = cos(shipAngle)*gunVelX - sin(shipAngle)*gunVelY
bulletVelY = sin(shipAngle)*gunVelX + cos(shipAngle)*gunVelY
If you were to use vector and matrix math, you would be doing all the same calculations as here, but they would be bundled up in single objects rather than pairs of x's and y's and four trig functions. It can greatly simplify your code:
shipTransform = translate(shipX, shipY)*rotate(shipAngle)
bulletPos = shipTransform*gunPos
I've given the explicit formulas because knowing how the bare arithmetic works is useful to the conceptual understanding.
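For completeness, here's the same arithmetic bundled into a tiny self-contained helper, written in C++ here; the names are mine and it translates directly to ActionScript. The angle is assumed to be in radians:
#include <cmath>

struct Vec2 { double x, y; };

// Rotate a gun offset (given in the ship's own frame) by the ship's angle,
// then translate by the ship's position to get the spawn point in world space.
Vec2 gunToWorld(Vec2 shipPos, double shipAngle, Vec2 gunOffset)
{
    double c = std::cos(shipAngle), s = std::sin(shipAngle);
    return Vec2{ shipPos.x + c * gunOffset.x - s * gunOffset.y,
                 shipPos.y + s * gunOffset.x + c * gunOffset.y };
}

// Rotation only, for a forward-firing muzzle velocity (no translation).
Vec2 rotate(double shipAngle, Vec2 v)
{
    double c = std::cos(shipAngle), s = std::sin(shipAngle);
    return Vec2{ c * v.x - s * v.y, s * v.x + c * v.y };
}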
Response to edit:
In the code you edited into your question, you are adding what I assume is the ship position into the coordinates you multiply by sin/cos. Don't do that — just multiply the offset of the gun position from the ship center by sin/cos and only then add that to the ship position. Also, you are using x x; y y on the two lines, where you should be using x y; x y. Here is your code edited to fix those two things:
_bullet.x = this.x + Math.cos( StaticMath.ToRad(this.rotation)) * (-10) - Math.sin( StaticMath.ToRad(this.rotation)) * (+15);
_bullet.y = this.y + Math.sin( StaticMath.ToRad(this.rotation)) * (-10) + Math.cos( StaticMath.ToRad(this.rotation)) * (+15);
This is the code for a gun at offset (-10, 15).
I'm trying to render the "mount" scene from Eric Haines' Standard Procedural Database (SPD), but the refraction part just doesn't want to co-operate. I've tried everything I can think of to fix it.
This one is my render (with Watt's formula):
(source: philosoraptor.co.za)
This is my render using the "normal" formula:
(source: philosoraptor.co.za)
And this one is the correct render:
(source: philosoraptor.co.za)
As you can see, there are only a couple of errors, mostly around the poles of the spheres. This makes me think that refraction, or some precision error, is to blame.
Please note that there are actually 4 spheres in the scene, their NFF definitions (s x_coord y_coord z_coord radius) are:
s -0.8 0.8 1.20821 0.17
s -0.661196 0.661196 0.930598 0.17
s -0.749194 0.98961 0.930598 0.17
s -0.98961 0.749194 0.930598 0.17
That is, there is a fourth sphere behind the more obvious three in the foreground. It can be seen in the gap left between these three spheres.
Here is a picture of that fourth sphere alone:
(source: philosoraptor.co.za)
And here is a picture of the first sphere alone:
(source: philosoraptor.co.za)
You'll notice that many of the oddities present in both my version and the correct version are missing. We can conclude that these effects are the result of interactions between the spheres; the question is, which interactions?
What am I doing wrong? Below are some of the potential errors I've already considered:
Refraction vector formula.
As far as I can tell, this is correct. It's the same formula used by several websites and I verified the derivation personally. Here's how I calculate it:
double sinI2 = eta * eta * (1.0f - cosI * cosI);
Vector transmit = (v * eta) + (n * (eta * cosI - sqrt(1.0f - sinI2)));
transmit = transmit.normalise();
I found an alternate formula in 3D Computer Graphics, 3rd Ed by Alan Watt. It gives a closer approximation to the correct image:
double etaSq = eta * eta;
double sinI2 = etaSq * (1.0f - cosI * cosI);
Vector transmit = (v * eta) + (n * (eta * cosI - (sqrt(1.0f - sinI2) / etaSq)));
transmit = transmit.normalise();
The only difference is that I'm dividing by eta^2 at the end.
Total internal reflection.
I tested for this, using the following conditional before the rest of my intersection code:
if (sinI2 <= 1)
Calculation of eta.
I use a stack-like approach for this problem:
/* Entering object. */
if (r.normal.dot(r.dir) < 0)
{
double eta1 = r.iorStack.back();
double eta2 = m.ior;
eta = eta1 / eta2;
r.iorStack.push_back(eta2);
}
/* Exiting object. */
else
{
double eta1 = r.iorStack.back();
r.iorStack.pop_back();
double eta2 = r.iorStack.back();
eta = eta1 / eta2;
}
As you can see, this keeps a stack of the IORs of the objects the ray has entered. When exiting, the code pops the current IOR off the stack and uses it, along with the IOR beneath it, to compute eta. As far as I know, this is the most correct way to do it.
This works for nested transmitting objects. However, it breaks down for intersecting transmitting objects. The problem here is that you need to define the IOR for the intersection independently, which the NFF file format does not do. It's unclear then, what the "correct" course of action is.
Moving the new ray's origin.
The new ray's origin has to be moved slightly along the transmitted path so that it doesn't intersect at the same point as the previous one.
p = r.intersection + transmit * 0.0001f;
p += transmit * 0.01f;
I've tried making this value smaller (0.001f) and (0.0001f) but that makes the spheres appear solid. I guess these values don't move the rays far enough away from the previous intersection point.
EDIT: The problem here was that the reflection code was doing the same thing. So when an object is reflective as well as refractive then the origin of the ray ends up in completely the wrong place.
Number of ray bounces.
I've artificially limited the number of ray bounces to 4. I tested raising this limit to 10, but that didn't fix the problem.
Normals.
I'm pretty sure I'm calculating the normals of the spheres correctly. I take the intersection point, subtract the centre of the sphere and divide by the radius.
Just a guess based on doing an image diff (and without reading the rest of your question): the problem looks to me to be the refraction on the back side of the sphere. You might be:
doing it backwards: e.g. reversing (or not reversing) the indexes of refraction.
missing it entirely?
One way to check for this would be to look at the mount through a cube that is almost facing the camera. If the refraction is correct, the picture should be offset slightly but otherwise un-altered. If it's not right, then the picture will seem slightly tilted.
So, after more than a year, I finally figured out what was going on here. Clear minds and all that. I was completely off track with the formula. I'm now using a formula by Heckbert instead, which I am sure is correct because I proved it myself using geometry and discrete math.
Here's the correct vector calculation:
double c1 = v.dot(n) * -1;
double c1Sq = pow(c1, 2);
/* Heckbert's formula requires eta to be eta2 / eta1, so I have to flip it here. */
eta = 1 / eta;
double etaSq = pow(eta, 2);

if (etaSq + c1Sq >= 1)
{
    Vector transmit = (v / eta) + (n / eta) * (c1 - sqrt(etaSq - 1 + c1Sq));
    transmit = transmit.normalise();
    ...
}
else
{
    /* Total internal reflection. */
}
In the code above, eta is eta1 (the IOR of the surface from which the ray is coming) over eta2 (the IOR of the destination surface), v is the incident ray and n is the normal.
There was another problem, which confused matters some more: I had to flip the normal when exiting an object (which is obvious - I missed it because the other errors were obscuring it).
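For reference, a minimal sketch of that normal flip, assuming the usual vector operators and that centre and radius are the sphere's fields (as described under "Normals" above):
Vector n = (r.intersection - centre) / radius;   // outward sphere normal
if (r.dir.dot(n) > 0)                            // the ray is exiting the sphere
    n = n * -1.0;                                // flip it to oppose the incident ray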
Lastly, my line of sight algorithm (to determine whether a surface is illuminated by a point light source) was not properly passing through transparent surfaces.
So now my images line up properly :)