Finding the intersection(s) between two angle ranges / segments - geometry

We have two angle ranges, (aStart, aSweep) and (bStart, bSweep), where start is the position where the segment begins, in the range [0, 2π), and sweep is the angular size of the segment, in the range (0, 2π].
We want to find all of the angle ranges where these two angle ranges overlap, if there are any.
We need a solution that covers at least three kinds of situation: the two ranges may overlap nowhere, in one interval, or in two intervals.
But the number of cases increases as we confront the reality of the Devil Line that exists at angle = 0, which messes up all of the inequalities whenever either of the angle ranges crosses it.

This solution works by normalising the angles to said Devil Line, so that one of the ranges (whose start we call the origin angle) always begins there. It turns out that this makes the rest of the procedure extremely simple.
#include <cmath>     // M_PI
#include <algorithm> // std::min

const float TPI = 2*M_PI;

// aStart and bStart must be in [0, 2PI)
// aSweep and bSweep must be in (0, 2PI]
// forInterval(float start, float sweep) gets called on each intersection found.
// It is possible for there to be zero, one, or two, you see, so it's not
// obvious how we would want to return an answer. We leave it abstract.
// Only reports overlaps, not contacts (i.e., it shouldn't report any overlaps of zero span).
template<typename F>
void overlappingSectors(float aStart, float aSweep, float bStart, float bSweep, F forInterval){
    // we find the lower angle and work relative to it
    float greaterAngle;
    float greaterSweep;
    float originAngle;
    float originSweep;
    if(aStart < bStart){
        originAngle = aStart;
        originSweep = aSweep;
        greaterAngle = bStart;
        greaterSweep = bSweep;
    }else{
        originAngle = bStart;
        originSweep = bSweep;
        greaterAngle = aStart;
        greaterSweep = aSweep;
    }
    // the greater range's start, measured relative to the origin range's start
    float greaterAngleRel = greaterAngle - originAngle;
    // first possible overlap: the greater range starts inside the origin range
    if(greaterAngleRel < originSweep){
        forInterval(greaterAngle, std::min(greaterSweep, originSweep - greaterAngleRel));
    }
    // second possible overlap: the greater range wraps past 2PI, across the
    // Devil Line, back into the origin range
    float greaterEndRel = greaterAngleRel + greaterSweep;
    if(greaterEndRel > TPI){
        forInterval(originAngle, std::min(greaterEndRel - TPI, originSweep));
    }
}
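As a quick usage sketch (the values are my own, made up for illustration): range a covers [0.1, 1.1), and range b starts at 3π/2 and sweeps 2.0 radians, wrapping across the Devil Line, so exactly one overlap should be reported.

#include <cstdio>

int main(){
    // b sweeps from 3PI/2 through 0 to about 0.43, so it overlaps a's [0.1, 1.1)
    // only on the far side of the Devil Line: expect start 0.1, sweep ~0.33
    overlappingSectors(0.1f, 1.0f, 1.5f*M_PI, 2.0f, [](float start, float sweep){
        std::printf("overlap at %f spanning %f\n", start, sweep);
    });
    return 0;
}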

Related

How to identify unit vectors with angle threshold quickly?

I am writing a program about computational geometry.
In this program, I need to identify unit vectors (the word "identify" may not be accurate), i.e., to check whether a unit vector already exists.
This procedure is used when checking whether two polygons are on one plane. The first step is to check whether the normals of the two polygons are very close (angle < 1.0 degree).
So, we can assume that
all vectors are unit vectors
vectors are random
For example, set the angle threshold to 1.0 degree, and say we have 6 vectors:
(1,0,0)
(0,1,0)
(1,0,1e-8) // in program, this will be normalized
(1,0,0)
(sin(45), cos(45),0)
(sin(44.9), cos(44.9),0)
then, the index of each vector is
0 1 0 0 2 2
i.e., the 1st/3rd/4th vectors are the same one because the angle between them is within 1.0 degree (or they are exactly the same direction), and the angle between the 5th/6th vectors is smaller than 1.0 degree.
Now the problem comes: I have hundreds of thousands of unit vectors to identify at different stages, and this procedure costs about half of the total time.
example code
std::vector<Vector3d> unitVecs; // all unit vectors
// more than 100,000 unit vectors in the real case

int getVectorID(const Vector3d& vec)
{
    for(int i=0; i<(int)unitVecs.size(); ++i) {
        if(calcAngle(unitVecs[i], vec) < 1.0) // 1.0 is the angle threshold, in degrees
            return i;
        // alternatively, check with the cos value:
        // if(unitVecs[i].dot(vec) > cos(1.0*RADIAN))
        //     return i;
    }
    return -1;
}

int insertVector(const Vector3d& vec)
{
    int idx = getVectorID(vec);
    if(idx != -1) return idx;
    unitVecs.push_back(vec);
    return (int)unitVecs.size()-1;
}
Does anyone have good ideas to accelerate this process?
If you are able to accept vectors which are merely "very close to being unit vectors", as opposed to vectors which are strictly within 1 degree of being a unit vector, you can simply check that, for a given vector, two values are very close to 0 and one value is very close to 1:
#include <cmath>
#include <vector>

bool valueCloseTo(float value, float trg, float epsilon=0.0001f) {
    return std::fabs(value - trg) <= epsilon;
}

bool isRoughlyUnitVector(float x, float y, float z, float epsilon=0.0001f) {
    // We can quickly return false if the components don't add up to near 1
    // Could also consider multiplying `epsilon` by 3 here to account for accumulated error
    if (!valueCloseTo(x + y + z, 1, epsilon)) return false;
    // Now ensure that of x, y, and z, two are ~0 and one is ~1
    int numZero = 0;
    int numOne = 0;
    std::vector<float> vec{ x, y, z };
    for (float v : vec) {
        // Count another ~0 value
        if (valueCloseTo(v, 0, epsilon)) numZero++;
        // Count another ~1 value
        else if (valueCloseTo(v, 1, epsilon)) numOne++;
        // If any value isn't close to 0 or 1, (x,y,z) is not a unit vector
        else return false;
        // False if we exceed two values near 0, or one value near 1
        if (numZero > 2 || numOne > 1) return false;
    }
    return true;
}
Note that this method does not give any way to define a "maximum offset angle" (like 1 degree in your question); instead it lets us work with an epsilon value, which isn't an angle but rather a simple linear value. As epsilon increases, vectors that are further from being unit vectors get accepted, but epsilon doesn't have an "angular" nature to it.
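If you do want to connect a linear epsilon back to an angle, one possible bridge (my own note, not part of the answer above) is the chord length: two unit vectors separated by angle θ differ by a vector of length 2·sin(θ/2). A sketch:

#include <cmath>

// Chord length between two unit vectors separated by `thetaRadians`:
// |u - v| = 2*sin(theta/2). This converts an angular tolerance into a
// linear tolerance on the whole difference vector (note: not directly on
// a single component, so treat a per-component epsilon as looser).
float chordEpsilonForAngle(float thetaRadians) {
    return 2.0f * std::sin(thetaRadians * 0.5f);
}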

Find intersection point ray/triangle in a right-hand coordinate system

I would like to get the intersection point of a ray (defined by an origin and a direction vector) with a triangle.
My engine uses a right-handed coordinate system, with X pointing forward, Y pointing left and Z pointing up.
---- Edit ----
With Antares's help, I convert my points to engine space with:
p0.x = -pt0.y;
p0.y = pt0.z;
p0.z = pt0.x;
But I don't know how to do the same with the direction vector.
I use the function from this stackoverflow question, whose original poster used this tutorial.
First we look for the distance t from the origin to the intersection point, in order to find its coordinates.
But I get a negative t, and the code returns true when the ray is outside the triangle (I placed it outside visually).
It sometimes returns false when I'm inside the triangle.
Here is the function I use to get the intersection point. I already checked that it works with 'classic' values, as in the original post.
const float kEpsilon = 0.000001;

V3f crossProduct(V3f point1, V3f point2){
    V3f vector;
    vector.x = point1.y * point2.z - point2.y * point1.z;
    vector.y = point2.x * point1.z - point1.x * point2.z;
    vector.z = point1.x * point2.y - point1.y * point2.x;
    return vector;
}

float dotProduct(V3f dot1, V3f dot2){
    float dot = dot1.x * dot2.x + dot1.y * dot2.y + dot1.z * dot2.z;
    return dot;
}

// orig: ray origin, dir: ray direction, triangle vertices: p0, p1, p2.
bool rayTriangleIntersect(V3f orig, V3f dir, V3f p0, V3f p1, V3f p2){
    // compute the plane's normal
    V3f p0p1, p0p2;
    p0p1.x = p1.x - p0.x;
    p0p1.y = p1.y - p0.y;
    p0p1.z = p1.z - p0.z;
    p0p2.x = p2.x - p0.x;
    p0p2.y = p2.y - p0.y;
    p0p2.z = p2.z - p0.z;
    // no need to normalize
    V3f N = crossProduct(p0p1, p0p2); // N

    // Step 1: finding P
    // check if the ray and the plane are parallel
    float NdotRayDirection = dotProduct(N, dir); // if the result is 0, the function will return false (no intersection)
    if (fabs(NdotRayDirection) < kEpsilon){ // almost 0
        return false; // they are parallel, so they don't intersect
    }
    // compute the d parameter using equation 2
    float d = dotProduct(N, p0);
    // compute t (equation P = O + tR, with P the intersection point, O the ray origin and R its direction)
    float t = -((dotProduct(N, orig) - d) / NdotRayDirection);
    // check if the triangle is behind the ray
    //if (t < 0){ return false; } // the triangle is behind
    // compute the intersection point using the equation
    V3f P;
    P.x = orig.x + t * dir.x;
    P.y = orig.y + t * dir.y;
    P.z = orig.z + t * dir.z;

    // Step 2: inside-outside test
    V3f C; // vector perpendicular to the triangle's plane
    // edge 0
    V3f edge0; // (the same as p0p1)
    edge0.x = p1.x - p0.x;
    edge0.y = p1.y - p0.y;
    edge0.z = p1.z - p0.z;
    V3f vp0;
    vp0.x = P.x - p0.x;
    vp0.y = P.y - p0.y;
    vp0.z = P.z - p0.z;
    C = crossProduct(edge0, vp0);
    if (dotProduct(N, C) < 0) { return false; } // P is on the right side
    // edge 1
    V3f edge1;
    edge1.x = p2.x - p1.x;
    edge1.y = p2.y - p1.y;
    edge1.z = p2.z - p1.z;
    V3f vp1;
    vp1.x = P.x - p1.x;
    vp1.y = P.y - p1.y;
    vp1.z = P.z - p1.z;
    C = crossProduct(edge1, vp1);
    if (dotProduct(N, C) < 0) { return false; } // P is on the right side
    // edge 2
    V3f edge2;
    edge2.x = p0.x - p2.x;
    edge2.y = p0.y - p2.y;
    edge2.z = p0.z - p2.z;
    V3f vp2;
    vp2.x = P.x - p2.x;
    vp2.y = P.y - p2.y;
    vp2.z = P.z - p2.z;
    C = crossProduct(edge2, vp2);
    if (dotProduct(N, C) < 0) { return false; } // P is on the right side
    return true; // this ray hits the triangle
}
My problem is that I get t: -52.603783
intersection point P: [-1143.477295, -1053.412842, 49.525799]
This gives me, relative to a 640x480 texture, the UV point: [-658, 41].
Probably because my engine uses Z pointing up?
"My engine uses a right-handed coordinate system, with X pointing forward, Y pointing left and Z pointing up."
You have a slightly incorrect idea of a right-handed coordinate system... please check https://en.wikipedia.org/wiki/Cartesian_coordinate_system#In_three_dimensions.
As the name suggests, X points right (right hand's thumb to the right), Y points up (straight index finger) and Z (straight middle finger) points "forward" (actually -Z is forward, and Z is backward, in the camera coordinate system).
Actually... your coordinate components are right-handed, but the interpretation of X as forward etc. is unusual.
If you suspect the problem could be the coordinate system of your engine (OGRE maybe? plain OpenGL? or something self-made?), then you need to transform your point and direction coordinates into the coordinate system of your algorithm. The algorithm you presented works in the camera coordinate system, if I am not mistaken. Of course you then need to transform the resulting intersection point back to the interpretation you use in the engine.
To flip the direction of a vector component (e.g. the Z coordinate), you can multiply it by -1.
Edit:
One more thing: I realized that the algorithm uses directional vectors as well, not just points. The rearranging of components only works for points, not directions, if I recall correctly. Maybe you have to do a matrix multiplication with the CameraView transformation matrix (or its inverse M^-1, or was it the transpose M^T? I am not sure). I can't help you there; I hope you can figure it out or just do trial & error.
"My problem is that I get t: -52.603783, intersection point P: [-1143.477295, -1053.412842, 49.525799]. This gives me, relative to a 640x480 texture, the UV point: [-658, 41]."
I reckon you think your values are incorrect. Which values do you expect to get for t and the UV coordinates? Which ones would be "correct" for your input?
Hope this gets you started. GL, HF with your project! :)
@GUNNM: Concerning your feedback that you do not know how to handle the direction vector, here are some ideas that might be useful to you.
As I said, there should be a matrix multiplication way. Look for key words like "transforming directional vector with a matrix" or "transforming normals (normal vectors) with a matrix". This should yield something like: "use the transpose of the used transformation matrix" or "the inverse of the matrix" or something like that.
A workaround could be: you can "convert" a directional vector to a point by thinking of a direction as "two points" forming a vector: a starting point, and another point which lies in the direction you want to point.
The starting point of your ray you already have available. Now you need to make sure that your directional vector is interpreted as a "second point", not as a "directional vector".
If your engine handles a ray as in the first case, you would have:
Here is my starting point (0,0,0) and here is my directional vector (5,6,-7) (I made those numbers up, and take the origin as the starting point to have a simple example). So this is just the usual "start + gaze direction" case.
In the second case you would have:
Here is my start at (0,0,0), and my second point is a point on my directional vector (5,6,-7), e.g. any t*direction; t=1 gives exactly the point your directional vector points to when it is considered a vector (with the start point being the origin (0,0,0)).
Now you need to check how your algorithm handles that direction. If it somewhere does ray = startpoint + direction, then it interprets it as point + vector, resulting in a shift of the starting point while keeping the orientation and direction of the vector.
If it does ray = startpoint - direction, then it interprets it as two points from which a directional vector is formed by subtracting.
To make a directional vector from two points you usually just subtract them. This gives a "pure direction", though, without a defined orientation (it can be followed with +t or -t). So if you need the direction to be fixed, you could, for example, take the absolute value of your "vector sliding value" t in later computations (maybe not the best/fastest way of doing it).
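To make the "two points" idea concrete, here is a minimal sketch under my own assumptions (V3f as in the question's code; toEngineSpace applies the component shuffle from the question's edit): transform the ray origin and a second point lying along the direction as points, then subtract to recover the direction in engine space.

V3f toEngineSpace(V3f q){
    // the component shuffle from the question's edit: p.x = -pt.y; p.y = pt.z; p.z = pt.x;
    V3f p;
    p.x = -q.y;
    p.y = q.z;
    p.z = q.x;
    return p;
}

V3f directionToEngineSpace(V3f orig, V3f dir){
    // a second point lying along the direction
    V3f tip;
    tip.x = orig.x + dir.x;
    tip.y = orig.y + dir.y;
    tip.z = orig.z + dir.z;
    // transform both as points, then subtract to get a direction again
    V3f o = toEngineSpace(orig);
    V3f t = toEngineSpace(tip);
    V3f d;
    d.x = t.x - o.x;
    d.y = t.y - o.y;
    d.z = t.z - o.z;
    return d;
}

For this particular shuffle (a pure permutation with a sign flip, and no translation) the two-point route gives exactly the same result as applying the shuffle to the direction directly; the distinction only starts to matter once the transform includes a translation.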

2D moving object collision

I'm creating a 2D simulation and I need to determine whether two moving objects A and B will cross paths.
A moves with a constant speed Va and B with Vb.
I'm able to determine the point where the objects' paths intersect, but I can't figure out whether they will actually collide.
I calculated the x coordinate of the point of collision using this formula, and the same for y.
Let's consider the case of two axis-aligned rectangles. They intersect if the projections of both onto the X-axis intersect, and the projections of both onto the Y-axis intersect.
First rectangle coordinates (Ax1,Ay1),(Ax2,Ay2), velocity vector (VAx,VAy)
Second rectangle coordinates (Bx1,By1),(Bx2,By2), velocity vector (VBx,VBy)
Time interval when X-projections intersect:
Ax2+VAx*t1=Bx1+VBx*t1
t1=(Bx1-Ax2)/(VAx-VBx)
t2=(Bx2-Ax1)/(VAx-VBx)
Interval is Ix=(t1,t2) (or (t2,t1) if t2 < t1)
For Y-projections
u1=(By1-Ay2)/(VAy-VBy)
u2=(By2-Ay1)/(VAy-VBy)
Interval is Iy=(u1,u2) (or (u2,u1) if u2 < u1)
Check if these two time ranges Ix and Iy intersect. If they do, objects collide.
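A minimal C++ sketch of the test just described (all names are mine; the division formulas above assume VAx differs from VBx, so the zero-relative-velocity case is handled separately here):

#include <algorithm>
#include <cmath>
#include <utility>

struct TimeInterval { float t1, t2; bool valid; };

// Time window during which the two projections onto one axis overlap.
// a1,a2: first rectangle's min/max on this axis; b1,b2: the second's;
// va,vb: the two velocities along this axis.
TimeInterval projectionOverlap(float a1, float a2, float b1, float b2, float va, float vb){
    float rv = va - vb; // relative velocity along this axis
    if (std::fabs(rv) < 1e-9f) {
        // no relative motion: the projections overlap for all time or never
        bool always = (a2 >= b1) && (b2 >= a1);
        return { -1e30f, 1e30f, always };
    }
    float t1 = (b1 - a2) / rv;
    float t2 = (b2 - a1) / rv;
    if (t2 < t1) std::swap(t1, t2);
    return { t1, t2, true };
}

bool willCollide(float ax1, float ay1, float ax2, float ay2, float vax, float vay,
                 float bx1, float by1, float bx2, float by2, float vbx, float vby){
    TimeInterval ix = projectionOverlap(ax1, ax2, bx1, bx2, vax, vbx);
    TimeInterval iy = projectionOverlap(ay1, ay2, by1, by2, vay, vby);
    if (!ix.valid || !iy.valid) return false;
    float enter = std::max(ix.t1, iy.t1); // both axes must overlap at once
    float exit  = std::min(ix.t2, iy.t2);
    return enter <= exit && exit >= 0.0f; // require the overlap to happen at t >= 0
}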
This is how I have it set up in my code. It probably won't work to simply add this to your code, but hopefully it will help you make sense of the math:
rectangleIntersect() will return true if the two objects have intersected.
public static boolean intersectRange(int min, int max, int min2, int max2){
    return Math.max(min, max) >= Math.min(min2, max2) &&
           Math.min(min, max) <= Math.max(min2, max2);
}

public static boolean intersectRange(float min, float max, float min2, float max2){
    return Math.max(min, max) >= Math.min(min2, max2) &&
           Math.min(min, max) <= Math.max(min2, max2);
}

public static boolean rectangleIntersect(Rectangle rect, Rectangle rect2){
    return intersectRange(rect.getX(), rect.getX() + rect.getWidth(), rect2.getX(), rect2.getX() + rect2.getWidth()) &&
           intersectRange(rect.getY(), rect.getY() + rect.getHeight(), rect2.getY(), rect2.getY() + rect2.getHeight());
}

Calculate signed distance between point and rectangle

I'm trying to write a function in GLSL that returns the signed distance to a rectangle. The rectangle is axis-aligned. I feel a bit stuck; I just can't wrap my head around what I need to do to make it work.
The best I came up with is this:
float sdAxisAlignedRect(vec2 uv, vec2 tl, vec2 br)
{
    // signed distances for x and y. these work fine.
    float dx = max(tl.x - uv.x, uv.x - br.x);
    float dy = max(tl.y - uv.y, uv.y - br.y);
    dx = max(0., dx);
    dy = max(0., dy);
    return sqrt(dx*dx + dy*dy);
}
Which produces a rectangle that looks like:
The lines show distance from the rectangle. It works fine, but ONLY for distances OUTSIDE the rectangle. Inside the rectangle the distance is a constant 0.
How do I also get accurate distances inside the rectangle using a unified formula?
How about this...
float sdAxisAlignedRect(vec2 uv, vec2 tl, vec2 br)
{
    vec2 d = max(tl - uv, uv - br);
    return length(max(vec2(0.0), d)) + min(0.0, max(d.x, d.y));
}
Here's the result, where green marks a positive distance and red negative (code below):
Breakdown:
Get the signed distance from the x and y borders. left - u and u - right are the two x-axis distances: each is negative inside its border and positive outside. Taking the maximum of these values gives the signed distance to the closest border. d.x and d.y are shown individually in the images below.
Combine x and y:
If both values are negative, take the maximum (i.e. the one closest to a border). This is done with min(0.0, max(d.x, d.y)).
If only one value is positive, that's the distance we want.
If both values are positive, the closest point is a corner, in which case we want the length. This can be combined with the previous case by taking the length anyway, after clamping both values to be non-negative: length(max(vec2(0.0), d)).
These two parts of the expression are mutually exclusive, i.e. only one of them will produce a non-zero value, so they can simply be summed.
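For example (numbers mine): for a point inside the rectangle where d = (-0.2, -0.1), length(max(vec2(0.0), d)) = 0 and min(0.0, max(d.x, d.y)) = min(0.0, -0.1) = -0.1, so the function returns -0.1, the signed distance to the nearest edge.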
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 uv = fragCoord.xy / iResolution.xy;
    uv -= 0.5;
    uv *= vec2(iResolution.x/iResolution.y, 1.0);
    uv += 0.5;
    float d = sdAxisAlignedRect(uv, vec2(0.3), vec2(0.7));
    float m = 1.0 - abs(d)/0.1;
    float s = sin(d*400.0) * 0.5 + 0.5;
    fragColor = vec4(s*m*(-sign(d)*0.5+0.5), s*m*(sign(d)*0.5+0.5), 0, 1);
}

How to compute the visible area based on a heightmap?

I have a heightmap. I want to efficiently compute which tiles in it are visible from an eye at any given location and height.
This paper suggests that heightmaps outperform turning the terrain into some kind of mesh, but they sample the grid using Bresenham's.
If I were to adopt that, I'd have to trace a line-of-sight Bresenham's line for each and every tile on the map. It occurs to me that it ought to be possible to reuse most of the calculations and compute the visibility in a single pass if you fill outwards away from the eye; a scanline-fill kind of approach, perhaps?
But the logic escapes me. What would the logic be?
Here is a heightmap with the visibility from a particular vantage point (green cube) painted over it ("viewshed", as in "watershed"?):
Here is the O(n) sweep that I came up with. It seems the same as Franklin and Ray's method, given in the paper in the answer below, only in this case I walk from the eye outwards instead of walking the perimeter doing a Bresenham's towards the centre. To my mind, my approach should have much better caching behaviour (i.e. be faster) and use less memory, since it doesn't have to track the vector for each tile, only remember a scanline's worth:
typedef std::vector<float> visbuf_t;

inline void map::_visibility_scan(const visbuf_t& in, visbuf_t& out, const vec_t& eye, int start_x, int stop_x, int y, int prev_y) {
    const int xdir = (start_x < stop_x)? 1: -1;
    for(int x=start_x; x!=stop_x; x+=xdir) {
        const int x_diff = abs(eye.x-x), y_diff = abs(eye.z-y);
        const bool horiz = (x_diff >= y_diff);
        const int x_step = horiz? 1: x_diff/y_diff;
        const int in_x = x-x_step*xdir; // where in the in-buffer would we get the inner value?
        const float outer_d = vec2_t(x,y).distance(vec2_t(eye.x,eye.z));
        const float inner_d = vec2_t(in_x,horiz? y: prev_y).distance(vec2_t(eye.x,eye.z));
        const float inner = (horiz? out: in).at(in_x)*(outer_d/inner_d); // get the inner value, scaling by distance
        const float outer = height_at(x,y)-eye.y; // height we are at right now in the map, eye-relative
        if(inner <= outer) {
            out.at(x) = outer;
            vis.at(y*width+x) = VISIBLE;
        } else {
            out.at(x) = inner;
            vis.at(y*width+x) = NOT_VISIBLE;
        }
    }
}

void map::visibility_add(const vec_t& eye) {
    const float BASE = -10000; // represents a downward vector that would always be visible
    visbuf_t scan_0, scan_out, scan_in;
    scan_0.resize(width);
    vis[eye.z*width+eye.x-1] = vis[eye.z*width+eye.x] = vis[eye.z*width+eye.x+1] = VISIBLE;
    scan_0.at(eye.x) = BASE;
    scan_0.at(eye.x-1) = BASE;
    scan_0.at(eye.x+1) = BASE;
    _visibility_scan(scan_0,scan_0,eye,eye.x+2,width,eye.z,eye.z);
    _visibility_scan(scan_0,scan_0,eye,eye.x-2,-1,eye.z,eye.z);
    scan_out = scan_0;
    for(int y=eye.z+1; y<height; y++) {
        scan_in = scan_out;
        _visibility_scan(scan_in,scan_out,eye,eye.x,-1,y,y-1);
        _visibility_scan(scan_in,scan_out,eye,eye.x,width,y,y-1);
    }
    scan_out = scan_0;
    for(int y=eye.z-1; y>=0; y--) {
        scan_in = scan_out;
        _visibility_scan(scan_in,scan_out,eye,eye.x,-1,y,y+1);
        _visibility_scan(scan_in,scan_out,eye,eye.x,width,y,y+1);
    }
}
Is it a valid approach?
it uses centre-points rather than looking at the slope between the 'inner' pixel and its neighbour on the side that the line of sight passes
could the trig used to scale the vectors and such be replaced by factor multiplication?
it could use an array of bytes, since the heights are themselves bytes
it's not a radial sweep; it does a whole scanline at a time, moving away from the point, and it only uses a couple of scanlines' worth of additional memory, which is neat
if it works, you could imagine distributing it nicely using a radial sweep of blocks: you have to compute the centre-most block first, but then you can distribute all immediately adjacent blocks from it (they just need to be given the edge-most intermediate values), and then in turn get more and more parallelism
So how to most efficiently calculate this viewshed?
What you want is called a sweep algorithm. Basically you cast rays (Bresenham's) to each of the perimeter cells, but keep track of the horizon as you go and mark each cell you pass on the way as visible or invisible (updating the ray's horizon if visible). This gets you down from the O(n^3) of the naive approach (testing each cell of an n x n DEM individually) to O(n^2).
There is a more detailed description of the algorithm in section 5.1 of this paper (which you might also find interesting for other reasons if you aspire to work with really enormous heightmaps).
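For concreteness, here is a minimal sketch of the per-ray horizon test that answer describes; all the names (rayCells, heightmap, vis) are mine, not code from the paper. Walk the cells of one Bresenham ray from the eye outward, keep the steepest slope seen so far, and a cell is visible exactly when its slope is not below that running horizon.

#include <cmath>
#include <cstdint>
#include <utility>
#include <vector>

// Marks visibility along one ray of cells, ordered from the eye outward.
// rayCells should start at the eye's neighbour (so dist > 0); eyeH is the eye height.
void markRay(const std::vector<std::pair<int,int>>& rayCells,
             float eyeX, float eyeY, float eyeH,
             const std::vector<float>& heightmap, int width,
             std::vector<uint8_t>& vis){
    float horizon = -1e30f; // steepest slope seen so far along this ray
    for (const auto& cell : rayCells) {
        const int x = cell.first, y = cell.second;
        const float dx = x - eyeX, dy = y - eyeY;
        const float dist = std::sqrt(dx*dx + dy*dy);
        // slope from the eye to this cell's top
        const float slope = (heightmap[y*width + x] - eyeH) / dist;
        vis[y*width + x] = (slope >= horizon); // visible iff not below the horizon
        if (slope > horizon) horizon = slope;
    }
}

Casting one such O(n) ray to each of the O(n) perimeter cells touches each cell a bounded number of times, which is where the O(n^2) bound comes from.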
