Refraction Vector (Ray tracing) - graphics

I am doing ray tracing and I compute the refraction of the ray using the following relation (I got it from a PDF called "Reflections and Refractions in Ray Tracing"):
But I have seen it in another PDF as follows:
Could you please explain to me why?
And how can I make sure that the refraction vector I calculated is correct?
Thanks

Assuming that your vectors are actually xyz triplets:
float3 reflect( float3 i, float3 n )
{
    // Mirror the incident direction i about the surface normal n.
    // Assumes n is unit length.
    return i - 2.0 * n * dot(n,i);
}

There's a dedicated (and nicely written!) introductory chapter on reflection and refraction formulas in the latest "Ray Tracing Gems 2" book, available for free at https://link.springer.com/book/10.1007/978-1-4842-7185-8 - see Chapter 8, by Eric Haines.

If you do the derivation yourself according to the figure, where the surface normal points in the opposite direction (the dot product of the incident ray and the normal is negative), I think it is safe to say the first formulas are correct. For the latter ones, it seems that the normal is flipped to the opposite side of the surface, yet all the cosine terms are computed with respect to that new normal vector. In that situation (notice that cos theta_i is now negative with respect to the downward-pointing normal, so we can substitute -cos(pi - theta_i) for it), we actually get an equivalent formula whose only difference is one extra negative sign on the normal vector. So I think the contradiction is caused by the direction of the normal vector and the definition of the incident angle.
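For a concrete check, here is a minimal sketch of the vector refraction formula under the first convention (unit vectors, with the normal pointing against the incident ray so that dot(n, i) < 0); the helper and parameter names are my own:

import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def refract(i, n, eta_i, eta_t):
    # i: unit incident direction, n: unit surface normal with dot(n, i) < 0,
    # eta_i / eta_t: refractive indices on the incident / transmitted side.
    eta = eta_i / eta_t
    cos_i = -dot(n, i)
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None  # total internal reflection, no transmitted ray
    cos_t = math.sqrt(1.0 - sin2_t)
    return tuple(eta * ic + (eta * cos_i - cos_t) * nc for ic, nc in zip(i, n))

To verify a computed refraction vector, check that it is unit length, that it lies in the plane spanned by i and n, and that it satisfies Snell's law: the sine of the angle it makes with the normal equals (eta_i / eta_t) times the sine of the incident angle. Flipping n and measuring the cosines against the flipped normal reproduces the second form, with the extra sign on the normal term, as described above.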

Related

Physically Based Shading, IBL, Half Vector, and NDotR vs NDotV

I'm trying to figure out some simple concepts about image based lighting for PBR. In many documents and code, I've seen the light direction (FragToLightDir) being set to the reflection vector (reflect(EyeToFragDir,Normal)). Then they set the half vector to the mid-way point between the light and view direction: HalfVecDir = normalize(FragToLightDir+FragToEyeDir); But doesn't this just result in the half vector being identical to the surface normal? If so, this would mean that terms like NDotH are always 1.0. Is this correct?
Here is another source of confusion for me. I'm trying to implement specular cube maps from the app Lys, using their algorithm for computing the correct mip level to sample based on roughness (here: https://docs.knaldtech.com/doku.php?id=specular_lys#pre-convolved_cube_maps_vs_path_tracers in the section Pre-convolved Cube Maps vs Path Tracers). In this document, they ask us to use NDotR as a scalar. But what is this NDotR with respect to IBL? If it means dot(Normal, ReflectDir), then isn't that exactly equivalent to dot(Normal, FragToEyeDir)? If I use either of these dot product results, the final result is too glossy at grazing angles (when compared to their more simplistic conversion using BurleyToMipSimple()), which makes me think I'm misunderstanding something about this process. I've tested the algorithm using NDotH, and it looks correct, but isn't that simply canceling out the rest of the math, since NDotH == 1.0? Here is my very simple function to extract the mip level using their suggested logic:
float computeSpecularCubeMipTest(float perc_ruf)
{
    //float n_dot_r = dot( Normal, Reflect );
    float specular_power = ( 2.0 / max( EPSILON, perc_ruf*perc_ruf*perc_ruf*perc_ruf ) ) - 2.0;
    specular_power /= ( 4.0 * max( NDotR, EPSILON ) );
    return sqrt( max( EPSILON, sqrt( 2.0 / ( specular_power + 2.0 ) ) ) ) * MipScaler;
}
I realize this is an esoteric subject. Since everyone is using popular game engines these days, no one is forced to understand this madness! But I appreciate any advice on how to go about this.
Edit: Just to make sure I'm clear, I'm referring to pure image based lighting, with no directional lights, no spot lights, etc. Just a cube map that lights the whole scene, similar to the lighting in apps like Substance Painter and Blender's Viewport shading mode.
I'm not familiar with this particular app, but it looks like you're on the right track here. Part of the advantage of pre-convolving the cube maps is to customize each pixel to be the light source for a particular reflection vector, so indeed NdotV is identical to NdotR, as you've noticed. The R still needs to be calculated for the texture lookup, so it doesn't matter much which one you use for the dot. There is no such thing as H or NdotH used for IBL lookups; those are only for point lights.
If the grazing angles look wrong, perhaps there's a Fresnel term missing somewhere? Reflections start to work differently at those angles.
For what it's worth, the glTF Sample Viewer is using NdotV for its specular IBL lookup.
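As a quick numeric check of the NdotR == NdotV point above (a sketch with a made-up normal and view direction, using the reflection R = 2*dot(N,V)*N - V):

import numpy as np

n = np.array([0.0, 0.0, 1.0])          # unit surface normal
v = np.array([0.3, -0.2, 0.9])
v /= np.linalg.norm(v)                 # unit frag-to-eye direction
r = 2.0 * np.dot(n, v) * n - v         # reflection of v about n
print(np.dot(n, r), np.dot(n, v))      # both print the same value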

Determining the direction of face normals consistently?

I'm a newbie to computer graphics so I apologize if some of my language is inexact or the question misses something basic.
Is it possible to calculate face normals correctly, given a list of vertices, and a list of faces like this:
v1: x_1, y_1, z_1
v2: x_2, y_2, z_2
...
v_n: x_n, y_n, z_n
f1: v1,v2,v3
f2: v4,v2,v5
...
f_m: v_j, v_k, v_l
Each x_i, y_i, z_i specifies the vertex's position in 3D space (but isn't necessarily a vector).
Each f_i contains the indices of the three vertices specifying it.
I understand that you can use the cross product of two sides of a face to get a normal, but the direction of that normal depends on the order and choice of sides (from what I understand).
Given that this is the only data I have, is it possible to correctly determine the direction of the normals? Or is it at least possible to determine them consistently (i.e. all normals may end up pointing in the wrong direction)?
In general there is no way to assign normals "consistently" all over a set of 3D faces... consider as an example the famous Möbius strip...
You will notice that if you start walking on it, after one loop you get back to the same point but on the opposite side. In other words, this strip doesn't have two faces, but only one. If you build such a shape with a strip of triangles, there is of course no way to assign normals in a consistent way, and you'll necessarily end up with two adjacent triangles whose normals point to opposite sides.
That said, if your collection of triangles is indeed orientable (i.e. there actually exists a consistent normal assignment), a solution is to start from one triangle and then propagate to the neighbors like in a flood-fill algorithm. For example, in Python it would look something like:
active = [triangles[0]]
oriented = set([triangles[0]])
while active:
    next_active = []
    for tri in active:
        for other in neighbors(tri):
            if other not in oriented:
                if not agree(tri, other):
                    flip(other)
                oriented.add(other)
                next_active.append(other)
    active = next_active
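The agree check is where the orientation logic lives: two neighboring triangles are consistently wound when their shared edge is traversed in opposite directions by the two vertex lists. A minimal sketch of that test (my own helper, assuming each triangle is a sequence of three vertex indices; neighbors and flip are left to the mesh representation):

def agree(t1, t2):
    # t1, t2: triangles sharing exactly one edge.  Consistent winding means
    # the shared edge (a, b) is traversed a -> b in one triangle and
    # b -> a in the other.
    a, b = [v for v in t1 if v in t2]
    def a_then_b(tri):
        i = tri.index(a)
        return tri[(i + 1) % 3] == b
    return a_then_b(t1) != a_then_b(t2)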
In CG this is handled by the polygon winding rule. That means all the faces are defined so that their points are in CW (or CCW) order when you look at the face directly. Then using the cross product leads to consistent normals.
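For example, a minimal sketch (my own helper, assuming vertices are (x, y, z) tuples listed in CCW order when seen from outside):

def face_normal(v0, v1, v2):
    # Cross product of two edges of the face; with consistent CCW winding
    # (seen from the outside) this points out of the surface.
    e1 = tuple(b - a for a, b in zip(v0, v1))   # v1 - v0
    e2 = tuple(b - a for a, b in zip(v0, v2))   # v2 - v0
    return (e1[1] * e2[2] - e1[2] * e2[1],
            e1[2] * e2[0] - e1[0] * e2[2],
            e1[0] * e2[1] - e1[1] * e2[0])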
However, many meshes out there do not comply with the winding rule (some faces are CW, others CCW, not all the same), and for those it's a problem. There are two approaches I know of:
for simple shapes (not too concave)
The sign of the dot product of your face_normal and (face_center - cube_center) will tell you whether the normal points into or out of the object:
if ( dot( face_normal , face_center-cube_center ) >= 0.0 ) normal_points_out
You can even use any point of the face instead of the face center. However, for more complex concave shapes this will not work correctly.
test whether a point above the face is inside the mesh
Simply displace the face center by some small distance (not too big) along the normal direction and then test whether that point is inside the polygonal mesh or not:
if ( !inside( face_center+0.001*face_normal ) ) normal_points_out
To check whether a point is inside or not you can use a hit test.
However, if the normal is used just for lighting computations, then its usage is usually inside a dot product. So we can use its absolute value instead, and that will solve all lighting problems regardless of which side the normal points to. For example:
output_color = face_color * abs(dot(face_normal,light_direction))
Some gfx APIs have this implemented already (look for double-sided materials or normals; turning them on usually uses the abs value). For example, in OpenGL:
glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_TRUE);

RayTracing: When to Normalize a vector?

I am rewriting my ray tracer and just trying to better understand certain aspects of it.
I seem to have down pat the issue regarding normals and how you should multiply them by the inverse of the transpose of a transformation matrix.
What I'm confused about is when I should be normalizing my direction vectors?
I'm following a certain book, and sometimes it will explicitly state to normalize my vector, while in other cases it doesn't and I find out later that I needed to.
A normalized vector points in the same direction, just with unit length 1, right? So I'm unclear on when it is necessary.
Thanks
You never need to normalize a vector unless you are working with the angles between vectors, or unless you are rotating a vector.
That's it.
In the former case, all of your trig functions require your vectors to land on a unit circle, which means the vectors are normalized. In the latter case, you are dividing out the magnitude, rotating the vector, making sure it stays a unit, and then multiplying the magnitude back in. Normalization just goes with the territory.
If someone tells you that coordinate systems are defined by n unit vectors, know that i-hat, j-hat, k-hat, and so on can be arbitrary vectors of any length and direction, so long as they are linearly independent. This is the heart of affine transformations.
If someone tries to tell you that the dot product requires normalized vectors, shake your head and smile. The dot product only needs normalized vectors when you are using it to get the angle between two vectors.
But doesn't normalization make the math "simpler"?
Not really -- It adds a magnitude computation and a division. Numbers between 0..1 are no different than numbers between 0..x.
Having said that, you sometimes normalize in order to play well with others. But if you find yourself normalizing vectors as a matter of principle before calling methods, consider using a flag attached to the vector to save yourself a step. Mathematically, it is unimportant, but practically, it can make a huge difference in performance.
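For example, a minimal sketch (my own hypothetical Vec3, not from any particular library) of attaching such a flag so repeated normalization is skipped:

import math

class Vec3:
    def __init__(self, x, y, z, is_unit=False):
        self.x, self.y, self.z = x, y, z
        self.is_unit = is_unit          # caller's promise that the length is 1

    def normalized(self):
        # Skip the sqrt and divides when the vector is already known to be unit length.
        if self.is_unit:
            return self
        m = math.sqrt(self.x * self.x + self.y * self.y + self.z * self.z)
        return Vec3(self.x / m, self.y / m, self.z / m, is_unit=True)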
So again... it's all about rotating a vector or measuring its angle against another vector. If you aren't doing that, don't waste cycles.
tl;dr: Normalized vectors simplify your math. They also reduce the number of very hard to diagnose visual artifacts in your images.
A normalized vector points in the same direction, just with unit length 1, right? So
I'm unclear on when it is necessary.
You almost always want all vectors in a ray tracer to be normalized.
The simplest example is that of the intersection test: where does a bouncing ray hit another object.
Consider a ray where:
p(t) = p_0 + v * t
In this case, a point anywhere along that ray p(t) is defined as an offset from the original point p_0 and an offset along a particular direction v. For every increment of parameter t, the resulting p(t) will move another increment of length equal to the length of the vector v.
Remember, you know p_0 and v. When you are trying to find the point where this ray next hits another object, you have to solve for that t. It is obviously more convenient, if not always strictly necessary, to use a normalized vector v in that representation.
However, that same vector v is used in lighting calculations. Imagine that we have another direction vector u that points towards a lighting source. For the purpose of a very simple shading model, we can define the light at a particular point to be the dot product between those two vectors:
L(p) = dot(v, u)
Admittedly, this is a very uninteresting reflection model but it captures the high points of the discussion. A spot on a surface is bright if reflection points towards the light and dim if not.
Now, remember that another way of writing this dot product is the product of the magnitudes of the vectors times the cosine of the angle between them:
L(p) = ||v|| ||u|| cos(theta)
If u and v are of unit length (normalized), then the equation evaluates to the cosine of the angle between the two vectors. However, if v is not of unit length, say because you didn't bother to normalize after reflecting the vector in the ray model above, your lighting model now has a problem: spots on the surface computed with a longer v will be much brighter than spots computed with a shorter one.
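A tiny numeric illustration of that point, with made-up vectors: the same direction stored unnormalized scales the dot-product lighting term by its length.

import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    m = math.sqrt(dot(v, v))
    return tuple(c / m for c in v)

v_long = (1.0, 2.0, 2.0)            # some direction, length 3
v_unit = normalize(v_long)          # same direction, unit length
u = (0.0, 0.0, 1.0)                 # unit direction towards the light

print(dot(v_unit, u))               # 0.666...: the cosine term you want
print(dot(v_long, u))               # 2.0: three times too bright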
It is necessary to normalize a direction vector whenever you use it in some math that is influenced by its length.
The prime example is the dot product, which is used in most lighting equations. You also sometimes need to normalize vectors that you use in lighting calculations, even if you believe they are already unit length.
For example, when using an interpolated normal on a triangle. Common sense tells you that since the normals at the vertices are unit length, the vectors you get by interpolating are too. So much for common sense... the truth is that they will be shorter unless they incidentally all point in the same direction. This means you will shade the triangle too dark (to make matters worse, the effect is more pronounced the closer the light source gets to the surface, which is a... very funny result).
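A small sketch of that pitfall with two made-up vertex normals 90 degrees apart:

import math

n0 = (1.0, 0.0, 0.0)                               # unit vertex normal
n1 = (0.0, 1.0, 0.0)                               # unit vertex normal, 90 degrees away
mid = tuple(0.5 * (a + b) for a, b in zip(n0, n1))  # interpolated half-way
length = math.sqrt(sum(c * c for c in mid))
print(length)                                      # ~0.707, not 1: shading comes out too dark
unit_mid = tuple(c / length for c in mid)          # re-normalize before any dot products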
Another example where a vector might or might not be normalized is the cross product, depending on what you are doing. For example, when using two cross products to build an orthonormal basis, you must normalize at least once (though if you do it naively, you end up doing it more often).
If you only care about the direction of the resulting "up vector", or about the sign, you don't need to normalize.
I'll answer the opposite question: when do you NOT need to normalize? Almost all calculations related to lighting require unit vectors - the dot product then gives you the cosine of the angle between the vectors, which is really useful. Some equations can still cope, but they become more complex (essentially doing the normalization inside the equation). That leaves mostly intersection tests.
Equations for many intersection tests can be simplified if you have unit vectors. Some do not require it - for example, if you have a plane equation (with a unit normal) you can find the ray-plane intersection without normalizing the ray direction vector. The distance will then be in units of the ray direction vector's length. This might be OK if all you want is to intersect a bunch of those planes (the relative distances will all be correct). But as soon as you want to compare with a different distance - calculated using the normalized ray direction - the distance values will not compare properly.
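For instance, a sketch in my own notation: with a plane dot(n, x) = d and a ray o + t*v, the solved t is measured in units of the length of v, so two such distances only compare fairly when the direction vectors have the same length.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def ray_plane_t(o, v, n, d):
    # Plane: dot(n, x) == d with n unit length.  Ray: o + t*v.
    # Works for an unnormalized v, but the returned t is in units of |v|.
    denom = dot(n, v)
    if abs(denom) < 1e-12:
        return None                 # ray parallel to the plane
    return (d - dot(n, o)) / denom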
You might think about normalizing a direction vector AFTER doing some work that does not require it - maybe you have an acceleration structure that can be traversed without a normalized vector. But that doesn't help much either, because eventually the ray will hit something and you're going to want to do a lighting/shading calculation with it. So you may as well normalize from the start...
In other words, any specific calculation may not require a normalized direction vector, but a given direction vector will almost certainly need to be normalized at some point in the process.
Vectors are used to store two conceptually different elements: points in space and directions:
If you are storing a point in space (for example the position of the camera, the origin of the ray, the vertices of triangles) you don't want to normalize, because you would be modifying the value of the vector, and losing the specific position.
If you are storing a direction (for example the camera up vector, the ray direction, the object normals) you want to normalize, because in this case you are not interested in the specific value of the point but in the direction it represents, so you don't need the magnitude. Normalization is useful here because it simplifies some operations, such as calculating the cosine of the angle between two vectors, something that can be done with a plain dot product when both are normalized.

Direct3D Geometry: Rotation Matrix from Two Vectors

Given two 3D vectors A and B, I need to derive a rotation matrix which rotates from A to B.
This is what I came up with:
1. Derive cosine from acos(A . B)
2. Derive sine from asin(|A x B| / (|A| * |B|))
3. Use A x B as axis of rotation
4. Use matrix given near the bottom of this page (axis angle)
This works fine except for rotations of 0° (which I ignore) and 180° (which I treat as a special case). Is there a more graceful way to do this using the Direct3D library? I am looking for a Direct3D specific answer.
Edit: Removed acos and asin (see Hugh Allen's post)
No, you're pretty much doing it the best way possible. I don't think there is a built-in DirectX function that does what you want. For step 4, you can use D3DXMatrixRotationAxis(). Just be careful about the edge cases, such as when |A| or |B| is zero, or when the angle is 0° or 180°.
It's probably more of a typo than a thinko, but acos(A.B) is the angle, not its cosine. Similarly for point 2.
You can calculate the sin from the cos using sin^2 + cos^2 = 1. That is, sin = sqrt(1-cos*cos). This would be cheaper than the vector expression you are using, and also eliminate the special cases for 0/180 degrees.
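Putting those pieces together, here is a minimal sketch (NumPy rather than Direct3D, with my own function name) of building the rotation from A to B via the axis-angle form, taking sin and cos directly from the cross and dot products so no acos/asin is needed:

import numpy as np

def rotation_a_to_b(a, b, eps=1e-8):
    # Rotation matrix taking unit vector a onto unit vector b (Rodrigues / axis-angle).
    # Assumes a and b are normalized; the 180-degree case still needs special
    # handling because the cross product no longer defines an axis.
    axis = np.cross(a, b)
    cos_t = np.dot(a, b)
    sin_t = np.linalg.norm(axis)        # |a x b| = sin(theta) for unit vectors
    if sin_t < eps:
        if cos_t > 0.0:
            return np.eye(3)            # a and b already coincide
        raise ValueError("180-degree rotation: pick any axis perpendicular to a")
    k = axis / sin_t                    # unit rotation axis
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + sin_t * K + (1.0 - cos_t) * (K @ K)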
You might look at the following article from siggraph link text
Maybe you can use D3DXMatrixLookAtLH ?

I need an algorithm for rendering soft paint brush strokes

I have an array of mouse points, a stroke width, and a softness. I can draw soft circles and soft lines. Which algorithm should I use for drawing my array of points? I want crossed lines to look nice as well as end points.
I would definitely choose a Bezier for that purpose, and in particular I would implement a piecewise cubic Bezier - it is truly easy to implement and grasp, and it is widely used by 3D Studio Max and Photoshop.
Here is a good source for it:
http://local.wasp.uwa.edu.au/~pbourke/surfaces_curves/bezier/cubicbezier.html
Assuming that you have an ordering of the points, to set the four control points you go as follows:
Define the tangents at point P[i] and at point P[i+1]:
T1 = (P[i+1] - P[i-1])
T2 = (P[i+2] - P[i])
And to create the piecewise segment between two points I do the following:
Control Point Q1: P[i]
Control Point Q2: the point lying along the tangent from Q1 => Q1 + 0.3T1
Control Point Q3: the point lying along the tangent to Q4 => Q4 - 0.3T2
Control Point Q4: P[i+1]
The choice of 0.3T is arbitrary; it gives the tangent enough 'strength' without overshooting. You can use more elaborate methods that also take care of acceleration (C2 continuity).
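A minimal sketch of that construction, assuming the points are tuples and that indices i-1 and i+2 exist (interior segments only); names are my own:

def cubic_bezier(q1, q2, q3, q4, t):
    # Evaluate the cubic Bezier defined by control points q1..q4 at t in [0, 1].
    s = 1.0 - t
    return tuple(s*s*s*a + 3.0*s*s*t*b + 3.0*s*t*t*c + t*t*t*d
                 for a, b, c, d in zip(q1, q2, q3, q4))

def segment_control_points(p, i):
    # Control points for the piece between p[i] and p[i+1], following the
    # tangent construction above; 0.3 is the arbitrary tension factor.
    t1 = tuple(b - a for a, b in zip(p[i - 1], p[i + 1]))   # T1 = P[i+1] - P[i-1]
    t2 = tuple(b - a for a, b in zip(p[i], p[i + 2]))       # T2 = P[i+2] - P[i]
    q1 = p[i]
    q4 = p[i + 1]
    q2 = tuple(a + 0.3 * b for a, b in zip(q1, t1))         # Q2 = Q1 + 0.3*T1
    q3 = tuple(a - 0.3 * b for a, b in zip(q4, t2))         # Q3 = Q4 - 0.3*T2
    return q1, q2, q3, q4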
Enjoy
Starting from Gooch & Gooch's Non-Photorealistic Rendering, you might find Pham's work useful - see PDF explaining algorithm.
There's a nice overview article by Tateosian which explains the additional techniques in less detail, with pretty pictures. Bezier curve drawing alone doesn't produce the effects you want (depending on how fancy you want to get). However, I'd certainly start with Paul's work and see if just using that to draw with your soft brush is good enough.
Be warned there are lots of patents in this space, sigh.
I think maybe you're looking for a spline algorithm.
Here is a spline tutorial, which you might find helpful:
http://www.doc.ic.ac.uk/~dfg/AndysSplineTutorial/index.html
The subject is also covered in most books on graphics programming.
Cheers.
I figured it out - use a very soft gradient circle, draw repeatedly to make a stroke, blend using multiply.
