Strange artifacts when ray casting a volume - graphics

So I am writing a volume ray caster (for the first time ever) in Java, learning from the code of the great VTK toolkit written in C++.
Everything works almost exactly like VTK, except I get these strange artifacts that look like elevation lines on the volume. I've noticed that VTK also shows them while the image is being manipulated, but they disappear when the image is static.
I've looked through the code multiple times and can't find the source of the artifacts. Maybe it is something simple that a computer graphics expert knows off the top of their head? :)
More info on my implementation
I am using the gradient method for normal calculations (a standard approach, from what I've found online)
I am using trilinear interpolation for ray point values
These "elevation line" artifacts look like value rounding errors, but I can't find any in my code
Increasing the resolution of the render does not solve the problem
The artifacts do not seem to be "facing" any fixed direction, like the camera position
I'm not attaching the code since it is huge :)
EDIT (ray composite loop)
while (Geometry.pointInsideCuboid(cuboid, position) && result.a > MINIMAL_OPACITY) {
    if (currentVoxel.notEquals(previousVoxel)) {
        final float value = VoxelUtils.interpolate(position, voxels, buffer);
        color = colorLUT.getColor(value);
        opacity = opacityLUT.getOpacityFromLut(value);
        if (enableShading) {
            final Vector3D normal = VoxelUtils.getNormal(position, voxels, buffer);
            final float cos = normal.dot(light.fixedDirection);
            final float gradientOpacity = cos < 0 ? 0 : cos;
            opacity *= gradientOpacity;
            if (cos > 0)
                color = color.clone().shade(cos, colorLUT.diffuse, colorLUT.specular);
        }
        previousVoxel.setTo(currentVoxel);
    }
    if (opacity > 0)
        result.accumulate(color, opacity);
    position.add(rayStep);
    currentVoxel.fromVector(position);
}
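For context, the gradient method mentioned above boils down to a central difference over the volume. A rough sketch of the idea (not my actual VoxelUtils code; trilerp() here is just a stand-in for the trilinear lookup, and bounds/spacing handling is omitted):

// Minimal sketch of a central-difference gradient normal (illustrative only;
// trilerp() is a hypothetical stand-in for the trilinear interpolation step).
static float[] gradientNormal(float x, float y, float z, float[][][] voxels) {
    float gx = trilerp(voxels, x + 1, y, z) - trilerp(voxels, x - 1, y, z);
    float gy = trilerp(voxels, x, y + 1, z) - trilerp(voxels, x, y - 1, z);
    float gz = trilerp(voxels, x, y, z + 1) - trilerp(voxels, x, y, z - 1);
    float len = (float) Math.sqrt(gx * gx + gy * gy + gz * gz);
    if (len == 0) {
        return new float[] {0, 0, 0};
    }
    return new float[] {gx / len, gy / len, gz / len};
}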

Related

Metal Shading Language Texture Will Not Clear

In my scene I have 2 objects, a Water Well and a Quad. The Well has 2 textures, a baseColorTexture (index 0) and a normalMapTexture (index 1). The quad has no textures applied to it. When rendering the scene I get something that looks like this.
THE QUAD IS USING THE WELL'S IMAGE. Now when looking at the debugger I find that there is no image bound to either index 0 or 1 for the quad, as the picture below shows.
My shaders use the following when sampling a texture: if (!is_null_texture(texture)) { ... }
Can anyone please give me an idea as to what may be occurring?
You can see all the code here:
github.com/twohyjr/Metal-Game-Engine-Tutorial/blob/master/….
but it looks like this.
float4 color = material.color;
if (!is_null_texture(baseColorMap)) {
    color = baseColorMap.sample(sampler2d, texCoord);
}

Convert between image coordinates (i-j-k) and world coordinates (x-y-z) with VTK in C#

Does anyone know how I can convert image coordinates acquired like this:
private void renderWindowControl1_Click(object sender, System.EventArgs e)
{
    int[] lastPos = this.renderWindowControl1.RenderWindow.GetInteractor().GetLastEventPosition();
    Z1TxtBox.Text = (_Slice1 + 1).ToString();
    X1TxtBox.Text = lastPos[0].ToString();
    Y1TxtBox.Text = (512 - lastPos[1]).ToString();
}
into physical coordinates.
Thanks, Tal
VTK may have an elegant method call for this, but in general you will need to use the information in your image's Image Plane module (specifically Equation C.7.6.2.1-1) in order to convert between a click and a physical location:
http://dicom.nema.org/medical/dicom/current/output/html/part03.html#sect_C.7.6.2
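In essence, that equation maps a pixel index to a patient-space point. A rough sketch (illustrative names only, not a VTK call; rowCosine and colCosine come from ImageOrientationPatient, the spacings from PixelSpacing, and origin from ImagePositionPatient):

// Rough sketch of DICOM Equation C.7.6.2.1-1 (illustrative, assumed names).
// physical = origin + i * columnSpacing * rowCosine + j * rowSpacing * colCosine
// where i is the column index and j the row index.
static double[] indexToPhysical(int i, int j, double[] origin,
                                double[] rowCosine, double[] colCosine,
                                double columnSpacing, double rowSpacing) {
    double[] p = new double[3];
    for (int k = 0; k < 3; k++) {
        p[k] = origin[k] + i * columnSpacing * rowCosine[k] + j * rowSpacing * colCosine[k];
    }
    return p;
}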
Here are some insights I got from working on this project:
int[] lastPos = this.renderWindowControl1.RenderWindow.GetInteractor().GetLastEventPosition();
returns the pixel location of the click inside the control. This is a problem because if the user zooms in, lastPos does not represent the location in the DICOM.
The solution I found was to use the vtkPropPicker class. Code examples can be found here and here.
The image_coordinate values are in world coordinates but without the origin offset, which means that:
1. If we want the pixel location (in the 512x512 grid), the x,y values should be normalized by the pixel spacing and image orientation. The values of these parameters can be acquired using the equation mentioned in the answer above (Equation C.7.6.2.1-1):
vtkDICOMImageReader reader;
reader.GetPixelSpacing();
reader.GetImageOrientationPatient();
2. If we need the world physical location, we should add the origin offset for x and y (see the sketch at the end of this answer):
reader.GetDataOrigin();
As for the Z axis: I didn't need it, so I am not sure.
That's my two cents on the matter; maybe there are more elegant ways, but I haven't found them.
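Putting the two points above together, a rough sketch (illustrative only; it assumes an axis-aligned ImageOrientationPatient, and that the spacing and origin values were read from the DICOM as shown above):

// Illustrative sketch only (axis-aligned orientation assumed, names hypothetical).
// Pixel location in the 512x512 grid: normalize the picked value by the pixel spacing.
int col = (int) Math.round(pickedX / columnSpacing);
int row = (int) Math.round(pickedY / rowSpacing);
// World physical location: add the origin offset back.
double physicalX = pickedX + origin[0];
double physicalY = pickedY + origin[1];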

Ray Trace Refraction looks "fake"

So in the ray tracer I'm building I've gotten refraction to work for spheres, along with caustic effects, but the glass spheres don't look particularly good. I believe the refraction math is correct because the light appears to bend and invert the way you'd expect; however, it doesn't look like glass, it just looks like paper or something.
I've read that total internal reflection is responsible for much of what makes glass look like it does; however, when I tested to see if any of my refracted rays exceed the critical angle, none of them do, so my glass sphere has no total internal reflection. I'm not sure if this is normal or if I've done something wrong. I've posted my refraction code below, so if anyone has any suggestions I'd love to hear them.
/*
* Parameter 'dir' lets you know whether the ray is starting
* from outside the sphere going in (0) or from in the sphere
* going back out (1).
*/
void Ray::Refract(Intersection *hit, int dir)
{
    float n1, n2;
    if (dir == 0) { n1 = 1.0; n2 = hit->mat->GetRefract(); }
    if (dir == 1) { n1 = hit->mat->GetRefract(); n2 = 1.0; }

    STVector3 N = hit->normal / hit->normal.Length();
    if (dir == 1) N = -N;

    STVector3 V = D / D.Length();
    double c1 = -STVector3::Dot(N, V);
    double n = n1 / n2;
    double c2 = sqrt(1.0f - (n * n) * (1.0f - (c1 * c1)));

    STVector3 Rr = (n * V) + (n * c1 - c2) * N;

    /* These are the parameters of the current ray being updated */
    E = hit->point; // Starting point
    D = Rr;         // Direction
}
This method is called during my main ray-tracing method RayTrace() which runs recursively. Here's a small section of it below that is in charge of refraction:
if (hit->mat->IsRefractive())
{
    temp.Refract(hit, dir); // temp is my ray that is being refracted
    dir++;
    result += RayTrace(temp, dir); // result is the returned RGB value
}
You're right, you'll never get total internal reflection in a sphere (seen from the outside). That's because of symmetry: a ray inside a sphere will hit the surface at the same angle at both ends, meaning that, if it exceeds the critical angle at one end, it would have to exceed it at the other one as well (and so would not have been able to enter the sphere from the outside in the first place).
However, you'll still get a lot of partial reflection according to Fresnel's law. It doesn't look like your code accounts for that, which is probably why your glass looks fake. Try including that and see if it helps.
(Yes, it means that your rays will split in two whenever they hit a refractive surface. That happens in reality, so you'll just have to live with it. You can either trace both paths, or, if you're using a randomized ray tracing algorithm anyway, just sample one of them randomly with the appropriate weights.)
P.S. If you don't want to deal with things like light polarization, you may want to just use Schlick's approximation to the Fresnel equations.
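For reference, something like this rough sketch of Schlick's approximation (illustrative only, not code from the question; n1 and n2 are the IORs on the incident and transmitted sides, cosI the cosine of the angle of incidence):

// Minimal sketch of Schlick's approximation (illustrative, assumed names).
static double schlickReflectance(double n1, double n2, double cosI) {
    double r0 = (n1 - n2) / (n1 + n2);
    r0 *= r0;
    double cosX = cosI;
    if (n1 > n2) {
        // Leaving the denser medium: use the cosine of the transmitted angle instead.
        double n = n1 / n2;
        double sinT2 = n * n * (1.0 - cosI * cosI);
        if (sinT2 > 1.0) {
            return 1.0; // total internal reflection
        }
        cosX = Math.sqrt(1.0 - sinT2);
    }
    double x = 1.0 - cosX;
    return r0 + (1.0 - r0) * x * x * x * x * x;
}

The returned value is the fraction of light to send along the reflected ray; the refracted ray gets the remaining 1 - R.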

How does inkscape calculate the coordinates for control points for "smooth edges"?

I am wondering what algorithm (or formula) Inkscape uses to calculate the control points if the nodes on a path are made "smooth".
That is, if I have a path with five nodes whose d attribute is
M 115.85065,503.57451
49.653441,399.52543
604.56143,683.48319
339.41126,615.97628
264.65997,729.11336
And I change the nodes to smooth, the d attribute is changed to
M 115.85065,503.57451
C 115.85065,503.57451 24.747417,422.50451
49.653441,399.52543 192.62243,267.61777 640.56491,558.55577
604.56143,683.48319 580.13686,768.23328 421.64047,584.07809
339.41126,615.97628 297.27039,632.32348 264.65997,729.11336
264.65997,729.11336
Obviously, Inkscape calculates the control point coordinates (the second-last and last coordinate pairs on the lines at and after the C). I am interested in the algorithm Inkscape uses for this.
I have found the corresponding piece of code in Inkscape's source tree under
src/ui/tool/node.cpp, method Node::_updateAutoHandles:
void Node::_updateAutoHandles()
{
    // Recompute the position of automatic handles.
    // For endnodes, retract both handles. (It's only possible to create an end auto node
    // through the XML editor.)
    if (isEndNode()) {
        _front.retract();
        _back.retract();
        return;
    }

    // Auto nodes automaticaly adjust their handles to give an appearance of smoothness,
    // no matter what their surroundings are.
    Geom::Point vec_next = _next()->position() - position();
    Geom::Point vec_prev = _prev()->position() - position();
    double len_next = vec_next.length(), len_prev = vec_prev.length();
    if (len_next > 0 && len_prev > 0) {
        // "dir" is an unit vector perpendicular to the bisector of the angle created
        // by the previous node, this auto node and the next node.
        Geom::Point dir = Geom::unit_vector((len_prev / len_next) * vec_next - vec_prev);
        // Handle lengths are equal to 1/3 of the distance from the adjacent node.
        _back.setRelativePos(-dir * (len_prev / 3));
        _front.setRelativePos(dir * (len_next / 3));
    } else {
        // If any of the adjacent nodes coincides, retract both handles.
        _front.retract();
        _back.retract();
    }
}
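Restated outside of Inkscape's classes, the rule in that method amounts to something like this rough standalone sketch (illustrative only; prev, node and next are three consecutive node positions given as x,y pairs):

// Rough standalone sketch of the auto-handle rule above (illustrative only).
static double[][] autoHandles(double[] prev, double[] node, double[] next) {
    double vnx = next[0] - node[0], vny = next[1] - node[1];
    double vpx = prev[0] - node[0], vpy = prev[1] - node[1];
    double lenNext = Math.hypot(vnx, vny);
    double lenPrev = Math.hypot(vpx, vpy);
    if (lenNext == 0 || lenPrev == 0) {
        // A coincident neighbour: Inkscape retracts both handles in this case.
        return new double[][] { {node[0], node[1]}, {node[0], node[1]} };
    }
    // Tangent direction: scaled difference of the vectors to the two adjacent nodes.
    double k = lenPrev / lenNext;
    double dx = k * vnx - vpx, dy = k * vny - vpy;
    double len = Math.hypot(dx, dy);
    if (len == 0) {
        return new double[][] { {node[0], node[1]}, {node[0], node[1]} };
    }
    dx /= len;
    dy /= len;
    // Each handle lies 1/3 of the distance to the adjacent node, along the tangent.
    double[] back  = { node[0] - dx * lenPrev / 3, node[1] - dy * lenPrev / 3 };
    double[] front = { node[0] + dx * lenNext / 3, node[1] + dy * lenNext / 3 };
    return new double[][] { back, front };
}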
I'm not 100% sure of the quality of this information, but at least at some point in time, for calculating some curves, Inkscape seems to have used Spiro.
http://www.levien.com/spiro/
Take a quick look at the page; he provides a link to his PhD thesis:
http://www.levien.com/phd/thesis.pdf
in which he introduces the theory and algorithms ...
Cheers
EDIT:
I'm currently investigating the matter a bit for a similar purpose, and I stumbled across
http://www.w3.org/TR/SVG11/paths.html#PathDataCurveCommands ... the specification of the curve commands for SVG paths.
So these curves (as opposed to circles or arcs) are cubic or quadratic Béziers.
Have a look at Wikipedia for the Bézier formulas as well:
http://en.wikipedia.org/wiki/B-spline#Uniform_quadratic_B-spline
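For reference, a cubic Bézier segment (the C command) with start point P0, control points P1 and P2, and end point P3 is evaluated as B(t) = (1-t)^3 * P0 + 3*(1-t)^2*t * P1 + 3*(1-t)*t^2 * P2 + t^3 * P3 for t in [0,1]; the quadratic form uses a single control point.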

Why won't my raytracer recreate the "mount" scene?

I'm trying to render the "mount" scene from Eric Haines' Standard Procedural Database (SPD), but the refraction part just doesn't want to cooperate. I've tried everything I can think of to fix it.
This one is my render (with Watt's formula):
(source: philosoraptor.co.za)
This is my render using the "normal" formula:
(source: philosoraptor.co.za)
And this one is the correct render:
(source: philosoraptor.co.za)
As you can see, there are only a couple of errors, mostly around the poles of the spheres. This makes me think that refraction, or some precision error, is to blame.
Please note that there are actually 4 spheres in the scene, their NFF definitions (s x_coord y_coord z_coord radius) are:
s -0.8 0.8 1.20821 0.17
s -0.661196 0.661196 0.930598 0.17
s -0.749194 0.98961 0.930598 0.17
s -0.98961 0.749194 0.930598 0.17
That is, there is a fourth sphere behind the more obvious three in the foreground. It can be seen in the gap left between these three spheres.
Here is a picture of that fourth sphere alone:
(source: philosoraptor.co.za)
And here is a picture of the first sphere alone:
(source: philosoraptor.co.za)
You'll notice that many of the oddities present in both my version and the correct version are missing. We can conclude that these effects are the result of interactions between the spheres; the question is which interactions?
What am I doing wrong? Below are some of the potential errors I've already considered:
Refraction vector formula.
As far as I can tell, this is correct. It's the same formula used by several websites and I verified the derivation personally. Here's how I calculate it:
double sinI2 = eta * eta * (1.0f - cosI * cosI);
Vector transmit = (v * eta) + (n * (eta * cosI - sqrt(1.0f - sinI2)));
transmit = transmit.normalise();
I found an alternate formula in 3D Computer Graphics, 3rd Ed by Alan Watt. It gives a closer approximation to the correct image:
double etaSq = eta * eta;
double sinI2 = etaSq * (1.0f - cosI * cosI);
Vector transmit = (v * eta) + (n * (eta * cosI - (sqrt(1.0f - sinI2) / etaSq)));
transmit = transmit.normalise();
The only difference is that I'm dividing by eta^2 at the end.
Total internal reflection.
I tested for this, using the following conditional before the rest of my intersection code:
if (sinI2 <= 1)
Calculation of eta.
I use a stack-like approach for this problem:
/* Entering object. */
if (r.normal.dot(r.dir) < 0)
{
    double eta1 = r.iorStack.back();
    double eta2 = m.ior;
    eta = eta1 / eta2;
    r.iorStack.push_back(eta2);
}
/* Exiting object. */
else
{
    double eta1 = r.iorStack.back();
    r.iorStack.pop_back();
    double eta2 = r.iorStack.back();
    eta = eta1 / eta2;
}
As you can see, this stores the previous objects that contained this ray in a stack. When exiting, the code pops the current IOR off the stack and uses that, along with the IOR under it, to compute eta. As far as I know this is the most correct way to do it.
This works for nested transmitting objects. However, it breaks down for intersecting transmitting objects. The problem here is that you need to define the IOR for the intersection region independently, which the NFF file format does not do. It's unclear, then, what the "correct" course of action is.
Moving the new ray's origin.
The new ray's origin has to be moved slightly along the transmitted path so that it doesn't intersect at the same point as the previous one.
p = r.intersection + transmit * 0.0001f;
p += transmit * 0.01f;
I've tried making this value smaller (0.001f and 0.0001f), but that makes the spheres appear solid. I guess these values don't move the rays far enough away from the previous intersection point.
EDIT: The problem here was that the reflection code was doing the same thing, so when an object is reflective as well as refractive, the origin of the ray ends up in completely the wrong place.
Amount of ray bounces.
I've artificially limited the number of ray bounces to 4. I tested raising this limit to 10, but that didn't fix the problem.
Normals.
I'm pretty sure I'm calculating the normals of the spheres correctly. I take the intersection point, subtract the centre of the sphere and divide by the radius.
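In code that amounts to something like this (illustrative, names assumed):
double nx = (hit.x - centre.x) / radius;
double ny = (hit.y - centre.y) / radius;
double nz = (hit.z - centre.z) / radius;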
Just a guess based on doing an image diff (and without reading the rest of your question): the problem looks to me to be the refraction on the back side of the sphere. You might be:
doing it backwards: e.g. reversing (or not reversing) the indexes of refraction.
missing it entirely?
One way to check for this would be to look at the mount through a cube that is almost facing the camera. If the refraction is correct, the picture should be offset slightly but otherwise unaltered. If it's not right, then the picture will seem slightly tilted.
So after more than a year, I finally figured out what was going on here. Clear minds and all that. I was completely off track with the formula. I'm instead using a formula by Heckbert now, which I am sure is correct because I proved it myself using geometry and discrete math.
Here's the correct vector calculation:
double c1 = v.dot(n) * -1;
double c1Sq = pow(c1, 2);
/* Heckbert's formula requires eta to be eta2 / eta1, so I have to flip it here. */
eta = 1 / eta;
double etaSq = pow(eta, 2);

if (etaSq + c1Sq >= 1)
{
    Vector transmit = (v / eta) + (n / eta) * (c1 - sqrt(etaSq - 1 + c1Sq));
    transmit = transmit.normalise();
    ...
}
else
{
    /* Total internal reflection. */
}
In the code above, eta is eta1 (the IOR of the medium the ray is coming from) over eta2 (the IOR of the destination medium), v is the incident ray and n is the normal.
There was another problem, which confused matters some more: I had to flip the normal when exiting an object (which is obvious in hindsight, but I missed it because the other errors were obscuring it).
Lastly, my line of sight algorithm (to determine whether a surface is illuminated by a point light source) was not properly passing through transparent surfaces.
So now my images line up properly :)
