Ray Box intersection - graphics

I was wondering if anyone knew of a good resource for ray-box intersection algorithms. I am writing a ray-tracing program and want to include a box primitive. To be specific, I need an algorithm that can return a "t value" for a ray parameterized as R = E + t*D, where E is the starting point and D is the direction vector. I have already implemented a ray-box intersection that is useful for bounding boxes, but it only returns a boolean value for whether the box was hit. That is no good here, since I need to be able to calculate the exact point in 3D space where the box was hit in order to render it.

I assume you are interested only in axis-aligned boxes. The code is available in the LuxRays sources: https://bitbucket.org/luxrender/luxrays/src/ceb10f7963250be95af709f98633907c13da7830/src/luxrays/core/geometry/bbox.cpp?at=default&fileviewer=file-view-default#bbox.cpp-148
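For reference, here is a minimal Python sketch of the standard slab method (not the LuxRays code itself, just the common technique) that returns the nearest t value for an axis-aligned box; E, D, box_min and box_max are assumed to be 3-component sequences:

def ray_aabb_t(E, D, box_min, box_max, eps=1e-12):
    # Slab method: intersect R = E + t*D with each pair of parallel
    # planes and keep the overlapping t interval [t_near, t_far].
    t_near, t_far = 0.0, float("inf")
    for i in range(3):
        if abs(D[i]) < eps:
            # Ray parallel to this slab: miss unless the origin lies in it.
            if E[i] < box_min[i] or E[i] > box_max[i]:
                return None
        else:
            t0 = (box_min[i] - E[i]) / D[i]
            t1 = (box_max[i] - E[i]) / D[i]
            if t0 > t1:
                t0, t1 = t1, t0
            t_near, t_far = max(t_near, t0), min(t_far, t1)
            if t_near > t_far:
                return None  # the slab intervals do not overlap
    # t_near == 0 means the origin is inside the box, so return the exit t.
    return t_near if t_near > 0.0 else t_far

The hit point in 3D space is then E + t*D.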

Determining the direction of face normals consistently?

I'm a newbie to computer graphics, so I apologize if some of my language is inexact or the question misses something basic.
Is it possible to calculate face normals correctly, given a list of vertices and a list of faces like this:
v1: x_1, y_1, z_1
v2: x_2, y_2, z_2
...
v_n: x_n, y_n, z_n
f1: v1,v2,v3
f2: v4,v2,v5
...
f_m: v_j, v_k, v_l
Each x_i, y_i, z_i specifies the vertex's position in 3D space (but isn't necessarily a vector).
Each f_i contains the indices of the three vertices specifying it.
I understand that you can use the cross product of two sides of a face to get a normal, but the direction of that normal depends on the order and choice of sides (from what I understand).
Given that this is the only data I have, is it possible to correctly determine the direction of the normals? Or is it at least possible to determine them consistently (i.e., possibly with all normals pointing in the wrong direction)?
In general there is no way to assign normals "consistently" over a set of 3D faces; consider, as an example, the famous Möbius strip.
You will notice that if you start walking on it, after one loop you get to the same point but on the opposite side. In other words, this strip doesn't have two sides, but only one. If you build such a shape from a strip of triangles, there is of course no way to assign normals in a consistent way, and you'll necessarily end up with two adjacent triangles whose normals point in opposite directions.
That said, if your collection of triangles is indeed orientable (i.e. a consistent normal assignment actually exists), a solution is to start from one triangle and then propagate to its neighbors like in a flood-fill algorithm. For example, in Python it would look something like:
# Breadth-first flood fill: fix the orientation of triangles[0] and
# propagate it outward, flipping any neighbor that disagrees.
active = [triangles[0]]
oriented = set([triangles[0]])
while active:
    next_active = []
    for tri in active:
        for other in neighbors(tri):
            if other not in oriented:
                if not agree(tri, other):
                    flip(other)  # reverse winding to match tri
                oriented.add(other)
                next_active.append(other)
    active = next_active
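The snippet above assumes neighbors, agree, and flip helpers. Here is one possible sketch of them (the Tri class and all names are my own assumptions, with triangles stored as mutable vertex-index triples): two consistently wound edge-neighbors traverse their shared edge in opposite directions, which is exactly what agree checks.

from collections import defaultdict

class Tri:
    # Minimal triangle record: a mutable vertex-index triple. Instances
    # hash by identity, so the set bookkeeping above works unchanged.
    def __init__(self, a, b, c):
        self.v = [a, b, c]

    def directed_edges(self):
        a, b, c = self.v
        return ((a, b), (b, c), (c, a))

def agree(tri, other):
    # Consistently wound neighbors walk their shared edge in opposite
    # directions, so no directed edge should appear in both triangles.
    return not set(tri.directed_edges()) & set(other.directed_edges())

def flip(tri):
    tri.v.reverse()  # reversing the vertex order flips the winding/normal

def make_neighbors(triangles):
    # Build edge-adjacency once; bind the result before the loop, e.g.
    # neighbors = make_neighbors(triangles).
    by_edge = defaultdict(list)
    for t in triangles:
        for a, b in t.directed_edges():
            by_edge[frozenset((a, b))].append(t)
    return lambda t: [o for e in t.directed_edges()
                      for o in by_edge[frozenset(e)] if o is not t]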
In CG this is done by a polygon winding rule. That means all the faces are defined so that their points are in CW (or CCW) order when the face is viewed from the front; then using the cross product will lead to consistent normals.
However, many meshes out there do not comply with the winding rule (some faces are CW, others CCW, not all the same), and for those it's a problem. There are two approaches I know of (both are sketched in code after the list):
For simple shapes (not too concave):
The sign of the dot product of your face_normal and face_center-cube_center will tell you whether the normal points inside or outside of the object:
if ( dot( face_normal , face_center-cube_center ) >= 0.0 ) normal_points_out
You can even use any point of the face instead of the face center. For more complex concave shapes, however, this will not work correctly.
Test whether a point just above the face is inside the mesh:
Simply displace the center of the face by some small distance (not too big) in the normal direction, and then test whether that point is inside the polygonal mesh or not:
if ( !inside( face_center+0.001*face_normal ) ) normal_points_out
To check whether a point is inside or not, you can use a hit test (cast a ray and count how many faces it crosses).
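Here is a minimal Python sketch of both tests (the data layout, the parity-based inside check, and all names are my own; it assumes a closed mesh stored as triangles of numpy vertex positions):

import numpy as np

def face_normal(v0, v1, v2):
    return np.cross(v1 - v0, v2 - v0)

def normal_points_out_centroid(v0, v1, v2, mesh_center):
    # Approach 1: only reliable for simple, mostly convex shapes.
    face_center = (v0 + v1 + v2) / 3.0
    return np.dot(face_normal(v0, v1, v2), face_center - mesh_center) >= 0.0

def ray_hits_tri(o, d, v0, v1, v2, eps=1e-9):
    # Moller-Trumbore ray/triangle intersection (counts hits with t > eps).
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(d, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:
        return False  # ray parallel to the triangle's plane
    s = o - v0
    u = np.dot(s, p) / det
    q = np.cross(s, e1)
    v = np.dot(d, q) / det
    t = np.dot(e2, q) / det
    return 0.0 <= u <= 1.0 and v >= 0.0 and u + v <= 1.0 and t > eps

def normal_points_out_parity(v0, v1, v2, all_triangles):
    # Approach 2: nudge the face center along the normal, then test whether
    # that point is inside the closed mesh by counting ray crossings.
    n = face_normal(v0, v1, v2)
    probe = (v0 + v1 + v2) / 3.0 + 0.001 * n / np.linalg.norm(n)
    d = np.array([0.5773, 0.5773, 0.5773])   # arbitrary ray direction
    hits = sum(ray_hits_tri(probe, d, *tri) for tri in all_triangles)
    return hits % 2 == 0                     # even crossings => outside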
However, if the normal is used just for lighting computations, then it usually appears inside a dot product, so we can use its absolute value instead; that solves the lighting regardless of which way the normal points. For example:
output_color = face_color * abs(dot(face_normal,light_direction))
Some graphics APIs implement this already (look for double-sided materials or normals; turning them on usually uses the abs value). For example, in OpenGL:
glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_TRUE);

BoundingBox Shape

In my Android mapping activity, I have a parallelogram-shaped area and I want to tell whether points (i.e. LatLng values) are inside it. I've tried using:
bounds = new LatLngBounds.Builder()
.include(latlngNW)
.include(latlngNE)
.include(latlngSW)
.include(latlngSE)
.build();
and later
if (bounds.contains(currentLatLng)) {
.....
}
but it is not that accurate. Do I need to create equations for lines connecting the four corners?
Thanks in advance.
LatLngBounds appears to create an axis-aligned box from the included points. Given that the shape I'm trying to monitor is a parallelogram, you do need to create equations for each edge of the shape and use if statements to determine on which side of each line a point lies.
Not an easy solution!
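For example, here is a minimal planar sketch of those side-of-line tests (names are mine; it treats latitude/longitude as flat x/y coordinates, which is only a reasonable approximation over small areas):

def side(p, a, b):
    # Sign of the 2D cross product: positive if p is left of the line a->b.
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def inside_parallelogram(p, corners):
    # corners listed in order around the shape (e.g. NW, NE, SE, SW);
    # a point is inside a convex quad iff it lies on the same side of
    # all four directed edges.
    signs = [side(p, corners[i], corners[(i + 1) % 4]) for i in range(4)]
    return all(s >= 0 for s in signs) or all(s <= 0 for s in signs)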
If you wish to build a parallelogram-shaped bounding "box" from a collection of points, and you know the desired angles of the parallelogram's sides, your best bet is probably to define a 2D linear shear transform which will map one of those angles to horizontal and the other to vertical. One may then feed the transformed points into normal "bounding box" routines, and feed the corners of the resulting box through the inverse of the above transform to get a bounding parallelogram.
Note that this approach is generally only suitable for parallelograms, not trapezoids. There are a few special cases where it could be used to find bounding trapezoids [e.g. if the top and bottom were horizontal, and the sides were supposed to converge at a known point (x0, y0), one could map x' = (x-x0)/(y-y0)], but for many kinds of trapezoids, the trapezoid formed by inverse-mapping the corners of a horizontal/vertical bounding rectangle may not properly bound the points that are supposed to be within it.
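A small numpy sketch of that shear idea, assuming the two side directions e1 and e2 of the parallelogram are known (all names are mine):

import numpy as np

def bounding_parallelogram(points, e1, e2):
    # points: (N, 2) array; e1, e2: non-parallel side-direction vectors.
    # Express each point in (u, v) coordinates along e1/e2, take an
    # ordinary axis-aligned bounding box there, then map the box corners
    # back through the inverse to get the bounding parallelogram.
    basis = np.column_stack([e1, e2])        # columns are the side directions
    uv = points @ np.linalg.inv(basis).T     # into the sheared space
    lo, hi = uv.min(axis=0), uv.max(axis=0)
    corners_uv = np.array([[lo[0], lo[1]], [hi[0], lo[1]],
                           [hi[0], hi[1]], [lo[0], hi[1]]])
    return corners_uv @ basis.T              # back to the original space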

Raytracing object matrix transformations with bounding volume hierarchies

I've run into an interesting issue in my ray tracer that I am developing. The objects in my scene are stored in a bounding volume hierarchy. Each individual object is encapsulated in a bounding box at a leaf node of the hierarchy and has a matrix transformation associated with it.
Now, the way I have been taught to do matrix transformations of objects in ray tracing is to transform each ray by the inverse of the object's matrix and then test for an intersection. In pseudocode (and without a BVH tree) it would look like this:
float minimum_distance = FLOAT_MAX;
Intersection closestHit = null;
for (each object in scene)
{
    Matrix transform = object.transform();
    Matrix inverse = transform.inverse();
    Ray transRay = transformRay(eyeRay, inverse);
    Intersection hit = CollisionTest(transRay, object);
    if (hit != null)  // an intersection was found
    {
        if (hit.distance() < minimum_distance)
        {
            minimum_distance = hit.distance();  // keep track of the closest hit
            closestHit = hit;
        }
    }
}
Shade(closestHit);
Since there is no bounding structure to the scene objects, you can loop through each one and transform the ray by the matrix of each object to test. But now imagine the following scenario with a BVH tree:
          ROOT
         /    \
 Left Box      Right Box
     |             |
     V             V
 object A      object B
Now let's say we have an eyeRay that intersects only the right box. The ray will only be checked against objects that are in the right box and will completely ignore any objects in the left box (which is the main advantage of putting your objects in a hierarchy like this: to avoid unnecessary checks).
However, object A in the left box has a scaling transform associated with it that, when applied, would stretch object A so that it crosses over into right-box territory. If the ray were allowed to check itself against every object in the scene, you would find that when the inverse of object A's transform is applied to the ray, an intersection is found. But the ray will never be allowed to do that transformed check, because the untransformed object A sits squarely in the left box. Thus the intersection is missed, and the result is a partially rendered object A.
So my question is: how would I resolve this issue? I don't want to give up using a bounding volume hierarchy, but I don't see how it is compatible with the above algorithm of inverting the ray. Any thoughts?
First, note that there are a number of BVH tree specializations. The more common ones exclude intersection of bounding volumes from neighboring nodes (i.e. the left box cannot intersect the right box).
Now the whole point of a bounding volume is that it bounds the underlying object. So if that object has a transform that scales it past the bounding volume, then that means the bounding volume is NOT a proper bounding volume (BV).
There are two ways to fix this. First, if the bounding volume was computed on the untransformed geometry, then when checking the ray-bounding-volume intersection, first transform the BV into the same coordinate system as the object.
Depending on what you're doing, a more efficient way may be to compute the BV directly on the transformed (scaled, translated, etc.) object. This way you don't need to transform the box (or the ray) when doing the initial ray-BV check.
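As a sketch of that second option (the 4x4 transform and the min/max box layout are my own assumptions), the world-space box can be rebuilt by transforming the eight corners of the object-space box and re-bounding them:

import itertools
import numpy as np

def world_space_aabb(bbox_min, bbox_max, transform):
    # Transform all 8 corners of an object-space AABB by the object's 4x4
    # matrix and take new min/max bounds; the result conservatively bounds
    # the transformed object, so the BVH stays valid.
    corners = np.array(list(itertools.product(*zip(bbox_min, bbox_max))),
                       dtype=float)
    hom = np.hstack([corners, np.ones((8, 1))])   # homogeneous coordinates
    world = (transform @ hom.T).T[:, :3]
    return world.min(axis=0), world.max(axis=0)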
Hope that helps.

Dot Product - How does it help to define whether the light source hits my object or not?

For example, the light source is at (1, 3, -5) and the object is at (4, -2, -1).
The algebraic formula gives the answer 3:
[1, 3, -5] · [4, -2, -1] = 1*4 + 3*(-2) + (-5)*(-1) = 3
But what does this 3 mean? How do I know if my object is shaded given this number 3? Or is there more to it? I did look around and was unable to find anything conclusive. It would be great if someone could give some insight. Thank you.
Judging from the answers, I wonder whether I'm misunderstanding my own question. I was trying to get my head around the following:
For a point on a convex surface, with normal n = (n1, n2, n3) and light direction l = (l1, l2, l3), determine if the point can be seen by the light source.
Using a dot product between two points makes no sense. Essentially, a dot product gives a measure of how similar two vectors are. When applied to points, the value will be related to the similarity of the direction to the points from the origin, as well as their distance from it. That metric doesn't make much sense, as you found out with that '3'.
To determine the amount of illumination, you want to be using a dot product between the normalized vector of the direction from the surface to the light and the surface normal. The result will be a value from -1 to 1, which you can interpret as an illumination factor for simple Gouraud shading. In pseudocode:
illumination = max(0, dot(normalize(lightPosition - positionOnSurface), surfaceNormal))
Determining whether a light actually reaches an object is an entirely different problem called occlusion, and not really something you express as a mathematical formula. It's about testing which objects are in the path from the light to your target object.
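As a worked example of the formula above in Python (the surface normal here is a made-up value, since the question only gives two points):

import numpy as np

def lambert(light_pos, surface_pos, surface_normal):
    # Diffuse factor: dot of the unit light direction with the unit
    # normal, clamped to zero so back-facing points receive no light.
    l = light_pos - surface_pos
    l = l / np.linalg.norm(l)
    n = surface_normal / np.linalg.norm(surface_normal)
    return max(0.0, float(np.dot(l, n)))

print(lambert(np.array([1.0, 3.0, -5.0]),    # light position
              np.array([4.0, -2.0, -1.0]),   # point on the surface
              np.array([0.0, 1.0, 0.0])))    # hypothetical normal -> ~0.707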
The dot product can tell you on which side of a line a point is. The triangle is formed by three lines, and if you are on the same side of all three lines then you are inside the triangle, so you can use three such tests, one per side. See slide 23 at this link: http://comp575.web.unc.edu/files/2010/09/06raytracing1.pdf.
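A minimal 2D sketch of that same-side test (the sign of a 2D cross product plays the role of the dot product with each edge's normal):

def edge_sign(o, a, p):
    # Sign of the 2D cross product of (a - o) and (p - o); it says on
    # which side of the directed line o->a the point p lies.
    return (a[0] - o[0]) * (p[1] - o[1]) - (a[1] - o[1]) * (p[0] - o[0])

def point_in_triangle(p, a, b, c):
    # p is inside iff it is on the same side of all three directed edges.
    d1, d2, d3 = edge_sign(a, b, p), edge_sign(b, c, p), edge_sign(c, a, p)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)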

Intersecting points with a polygon in OpenCV

My inputs
I have a vector<Point2f> that contains the contours of a polygon. I also have a list of points that need to be intersected with this polygon.
The problem
I want to calculate how many of these points fall inside the polygon. I want to repeat this calculation on a number of polygons to see which one contains the highest number of points.
Does OpenCV provide such intersection functionality of its own, or will I need to implement an intersection function myself? I'm worried that if I try to implement it myself, the result will be unnecessarily slow. If OpenCV can't do it, are there other free graphics libraries that can perform this task?
pointPolygonTest does exactly what you're looking for, and it's pretty well optimized. The parameter is a Mat which you can make with the constructor that takes your vector of points.
The function determines whether the point is inside a contour, outside, or lies on an edge (or coincides with a vertex). Correspondingly, it returns a positive (inside), negative (outside), or zero (on an edge) value. When measureDist=false, the return value is +1, -1, or 0, respectively. Otherwise, the return value is the signed distance between the point and the nearest contour edge.
Your problem seems easily parallelizable, though, i.e. each batch of candidate polygons could run on a different thread, so I'd definitely look into that if you're concerned about performance.
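For example, a small Python/OpenCV illustration of the counting loop (the polygon and points here are made up):

import numpy as np
import cv2

# A square contour and a few candidate points; pointPolygonTest with
# measureDist=False returns +1 (inside), 0 (on the edge) or -1 (outside).
polygon = np.array([[0, 0], [100, 0], [100, 100], [0, 100]], dtype=np.float32)
points = [(50.0, 50.0), (150.0, 50.0), (0.0, 0.0)]

inside = sum(cv2.pointPolygonTest(polygon, p, False) >= 0 for p in points)
print(inside)  # 2: one interior point and one lying on a vertex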
