I am debugging an algorithm I wrote that converts a complex, self-intersecting polygon into a simple polygon by identifying all intersection points and walking around the outside perimeter.
I wrote a series of random-data stress tests, and in one of them I hit an interesting situation that caused my algorithm to fail after it had worked correctly thousands upon thousands of times.
double fRand(double fMin, double fMax)
{
    double f = (double)rand() / RAND_MAX; // On my machine RAND_MAX = 2147483647
    return fMin + f * (fMax - fMin);
}
// ... testing code below:
srand(1);
for (int j = 3; j < 5000; j += 5) {
    std::vector<E_Point> geometry;
    for (int i = 0; i < j; i++) {
        double radius = fRand(0.6, 1.0);
        double angle = fRand(0, 2 * 3.1415926535);
        E_Point pt(cos(angle), sin(angle));
        pt *= radius;
        geometry.push_back(pt); // sending in a pile of shit
    }
    // run the algorithm on this geometry
}
That was an overview of how this is relevant and how I got to where I am now. There are a lot more details I'm leaving out.
What I have been able to do is narrow the issue down to the segment-segment intersection code I am using:
bool intersect(const E_Point& a0, const E_Point& a1,
               const E_Point& b0, const E_Point& b1,
               E_Point& intersectionPoint) {
    if (a0 == b0 || a0 == b1 || a1 == b0 || a1 == b1) return false;
    double x1 = a0.x; double y1 = a0.y;
    double x2 = a1.x; double y2 = a1.y;
    double x3 = b0.x; double y3 = b0.y;
    double x4 = b1.x; double y4 = b1.y;
    // AABB early exit
    if (b2Max(x1,x2) < b2Min(x3,x4) || b2Max(x3,x4) < b2Min(x1,x2)) return false;
    if (b2Max(y1,y2) < b2Min(y3,y4) || b2Max(y3,y4) < b2Min(y1,y2)) return false;
    float ua = ((x4 - x3) * (y1 - y3) - (y4 - y3) * (x1 - x3));
    float ub = ((x2 - x1) * (y1 - y3) - (y2 - y1) * (x1 - x3));
    float denom = (y4 - y3) * (x2 - x1) - (x4 - x3) * (y2 - y1);
    // check against DBL_EPSILON (the gap between 1.0 and the next
    // representable double; note this is not the smallest normalized value)
    if (fabs(denom) < DBL_EPSILON) {
        // Lines are too close to parallel to call
        return false;
    }
    ua /= denom;
    ub /= denom;
    if ((0 < ua) && (ua < 1) && (0 < ub) && (ub < 1)) {
        intersectionPoint.x = (x1 + ua * (x2 - x1));
        intersectionPoint.y = (y1 + ua * (y2 - y1));
        return true;
    }
    return false;
}
What's happening is that I have two intersections for which this function returns the exact same intersection-point value. Here is a very zoomed-in view of the relevant geometry:
The vertical line is defined by the points
(0.3871953044519425, -0.91857980824611341), (0.36139704793723609, 0.91605957361605106)
the green nearly-horizontal line by the points (0.8208980020500205, 0.52853407296583088), (0.36178501611208552, 0.88880385168617226)
and the white line by (0.36178501611208552, 0.88880385168617226), (-0.43211245441046209, 0.68034202227710472)
As you can see, the last two lines do share a common point.
My function gives me a solution of (0.36178033094571277, 0.88880245640159794)
for BOTH of these intersections (one of which you see in the picture as a red dot).
The reason this is a big problem is that my perimeter algorithm depends on sorting the intersection points along each edge. Since both of these intersection points were calculated to have exactly the same value, the sort put them in the wrong order. The perimeter path comes in from above and followed the white line left rather than the green line left, meaning I was no longer following the outside perimeter of my polygon shape.
To fix this problem there are probably a lot of things I could do, but I do not want to search through all of the intersection lists to check whether positions compare equal against other points. A better solution would be to try to increase the accuracy of my intersection function.
So what I'm asking is: why is the solution point so inaccurate? Is it because one of the lines is almost vertical? Should I be performing some kind of transformation first? In both cases, the almost-vertical line was passed as a0 and a1.
Update: Hey, look at this:
TEST(intersection_precision_test) {
    E_Point problem[] = {
        {0.3871953044519425, -0.91857980824611341},  // 1559
        {0.36139704793723609, 0.91605957361605106},  // 1560
        {-0.8208980020500205, 0.52853407296583088},  // 1798
        {0.36178501611208552, 0.88880385168617226},  // 1799
        {-0.43211245441046209, 0.6803420222771047}   // 1800
    };
    std::cout.precision(16);
    E_Point result;
    intersect(problem[0], problem[1], problem[2], problem[3], result);
    std::cout << "1: " << result << std::endl;
    intersect(problem[0], problem[1], problem[3], problem[2], result);
    std::cout << "2: " << result << std::endl;
    intersect(problem[1], problem[0], problem[2], problem[3], result);
    std::cout << "3: " << result << std::endl;
    intersect(problem[1], problem[0], problem[3], problem[2], result);
    std::cout << "4: " << result << std::endl;
    intersect(problem[2], problem[3], problem[0], problem[1], result);
    std::cout << "rev: " << result << std::endl;
    intersect(problem[3], problem[2], problem[0], problem[1], result);
    std::cout << "revf1: " << result << std::endl;
    intersect(problem[2], problem[3], problem[1], problem[0], result);
    std::cout << "revf2: " << result << std::endl;
    intersect(problem[3], problem[2], problem[1], problem[0], result);
    std::cout << "revfboth: " << result << std::endl;
}
Output:
Starting Test intersection_precision_test, at Polygon.cpp:1830
1: <0.3617803309457128,0.8888024564015979>
2: <0.3617803309457128,0.8888024564015979>
3: <0.3617803314022162,0.8888024239374175>
4: <0.3617803314022162,0.8888024239374175>
rev: <0.3617803635476076,0.8888024344185281>
revf1: <0.3617803313928456,0.8888024246235207>
revf2: <0.3617803635476076,0.8888024344185281>
revfboth: <0.3617803313928456,0.8888024246235207>
Am I actually running out of mantissa bits or can I do significantly better with a smarter algorithm?
The problem here is that there's no easy way for me to determine when a vertex lies really, really close to another line. I wouldn't mind moving it, or even completely nuking it, because neither of those will screw up my sort!
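For reference, one way to measure that closeness is the squared distance from a point to a segment. A minimal sketch, assuming E_Point exposes public x and y; the rejection threshold would be up to you:

double distSqPointSegment(const E_Point& p, const E_Point& s0, const E_Point& s1) {
    double vx = s1.x - s0.x, vy = s1.y - s0.y;   // segment direction
    double wx = p.x - s0.x,  wy = p.y - s0.y;    // s0 -> p
    double len2 = vx * vx + vy * vy;
    double t = (len2 > 0.0) ? (wx * vx + wy * vy) / len2 : 0.0;
    if (t < 0.0) t = 0.0; else if (t > 1.0) t = 1.0;  // clamp to the segment
    double dx = p.x - (s0.x + t * vx);
    double dy = p.y - (s0.y + t * vy);
    return dx * dx + dy * dy;
}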
If you change your float intermediates (ua, ub, denom) to doubles and print the ua values (after division), you'll get these:
0x1.f864ab6b36458p-1 in the first case
0x1.f864af01f2037p-1 in the second case
I've printed them in hex to make it easy to look at the bits. These two values agree in the first 22 bits (1.f864a plus the high bit of b and f). A float stores only 23 explicit significand bits! It's no surprise that if you compute your intermediates in floats, they get rounded to the same answer.
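If you want to reproduce this kind of dump, hex-float output is built in. A minimal sketch, using an illustrative value rather than the ones above:

#include <cstdio>
#include <iostream>

int main() {
    double ua = 0.9851234;                          // illustrative value only
    std::printf("%a\n", ua);                        // C99/C++11 hex-float format
    std::cout << std::hexfloat << ua << std::endl;  // iostream equivalent
}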
In this case, you can perhaps work around the problem by computing your intermediates using doubles instead of floats. (I made my point struct use doubles for x and y. If you're using floats, I don't know if computing the intermediates in doubles will help.)
However, a case where the vertical segment passes even closer to the intersection of the two horizontal segments might still require even more precision than a double can provide.
What would you do if the vertical segment passed exactly through the shared endpoint of the horizontal segments? I assume you handle this case correctly.
If you look at the ub values (after division), computed with doubles, you get these:
0.9999960389052315
5.904388076838819e-06
This means that the intersection is very, very close to the shared endpoint of the horizontal segments.
So here's what I think you can do. Each time you compute an intersection, look at ub. If it's sufficiently close to 1, move the endpoint to be the intersection point. Then treat the intersection as if it had been an exact pass-through case in the first place. You don't really have to change the data, but you have to treat the endpoint as moved for both of the segments that share the endpoint, which means the current intersection test and the test on the next line segment.
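A hedged sketch of that snapping step; it assumes intersect() is extended to also report ub (called intersectWithUb here, a hypothetical name), and the tolerance is a placeholder to tune:

const double kEndpointTol = 1e-5;  // placeholder tolerance; tune for your data

E_Point ip;
double ub;
if (intersectWithUb(a0, a1, b0, b1, ip, ub)) {
    if (ub > 1.0 - kEndpointTol) {
        // Treat this as an exact pass-through of b's endpoint: snap b1 onto
        // the intersection point, and treat the next segment sharing b1 as
        // starting from ip as well, so both edges stay stitched together.
        b1 = ip;
    }
}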
Related
I am writing a computational geometry program.
In this program, I need to identify unit vectors ("identify" may not be the exact word),
i.e., to check whether a given unit vector already exists in a collection.
This procedure is used when checking whether two polygons lie in one plane. The first step is to check whether the normals of the two polygons are very close (angle < 1.0 degree).
So, we can assume that
all vectors are unit vectors
vectors are random
For example, set the angle threshold to 1.0 degree, and suppose we have 6 vectors:
(1,0,0)
(0,1,0)
(1,0,1e-8) // in program, this will be normalized
(1,0,0)
(sin(45), cos(45),0)
(sin(44.9), cos(44.9),0)
then the index of each vector is
0 1 0 0 2 2
i.e., the 1st/3rd/4th vectors are identified as the same one because the angle between them is within 1.0 degree (or they have exactly the same direction), and the angle between the 5th/6th vectors is smaller than 1.0 degree.
Now the problem: I have hundreds of thousands of unit vectors to identify, across different stages, and this procedure costs about half of the total running time.
Example code:
std::vector<Vector3d> unitVecs; // all unit vectors
// more than 100,000 unit vectors in the real case

int getVectorID(const Vector3d& vec)
{
    for (int i = 0; i < (int)unitVecs.size(); ++i) {
        if (calcAngle(unitVecs[i], vec) < 1.0)  // 1.0 is the angle threshold in degrees
            return i;
        // alternatively, check with the cosine instead:
        // if (unitVecs[i].dot(vec) > cos(1.0 * RADIAN))
        //     return i;
    }
    return -1;
}

int insertVector(const Vector3d& vec)
{
    int idx = getVectorID(vec);
    if (idx != -1) return idx;
    unitVecs.push_back(vec);
    return unitVecs.size() - 1;
}
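For concreteness, a hedged sketch of how the six example vectors above would map to indices through insertVector; the Vector3d constructor and the assumption that inputs arrive normalized are mine, not from the original code:

#include <cmath>

int main() {
    double d = 1.0 / std::sqrt(2.0);              // sin(45 deg) = cos(45 deg)
    insertVector(Vector3d(1, 0, 0));              // -> 0
    insertVector(Vector3d(0, 1, 0));              // -> 1
    insertVector(Vector3d(1, 0, 1e-8));           // -> 0 (normalized, within 1 degree of #0)
    insertVector(Vector3d(1, 0, 0));              // -> 0
    insertVector(Vector3d(d, d, 0));              // -> 2
    insertVector(Vector3d(0.70587, 0.70834, 0));  // -> 2 (44.9 deg, within 1 degree of #4)
}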
Does anyone have good ideas to accelerate this process?
If you are able to accept vectors which are merely "very close to an axis-aligned unit vector", as opposed to applying the strict 1-degree angular threshold, you can simply check that, for a given vector, two of the three components are very close to 0 and one is very close to 1:
#include <cmath>
#include <vector>

bool valueCloseTo(float value, float trg, float epsilon = 0.0001f) {
    return std::fabs(value - trg) <= epsilon;
}

bool isRoughlyUnitVector(float x, float y, float z, float epsilon = 0.0001f) {
    // We can quickly return false if the components don't sum to near 1
    // (could also consider multiplying epsilon by 3 here to account for accumulated error)
    if (!valueCloseTo(x + y + z, 1, epsilon)) return false;
    // Now ensure that of x, y, and z, two are ~0 and one is ~1
    int numZero = 0;
    int numOne = 0;
    std::vector<float> vec{ x, y, z };
    for (float v : vec) {
        if (valueCloseTo(v, 0, epsilon)) numZero++;      // count another ~0 value
        else if (valueCloseTo(v, 1, epsilon)) numOne++;  // count another ~1 value
        else return false;  // a value not close to 0 or 1: not an axis-aligned unit vector
        // False if we exceed two values near 0, or one value near 1
        if (numZero > 2 || numOne > 1) return false;
    }
    return true;
}
Note that this method does not give you any way to define a "maximum offset angle" (like the 1 degree in your question); instead it works with an epsilon value, which isn't an angle but a simple linear tolerance. As epsilon increases, vectors that are further from being unit vectors get accepted, but epsilon doesn't have an "angular" nature to it.
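For illustration, a quick sanity check of the helper above; this is a sketch, and the test values are arbitrary:

#include <cstdio>

// assumes valueCloseTo / isRoughlyUnitVector from the answer above

int main() {
    std::printf("%d\n", isRoughlyUnitVector(1.0f, 0.0f, 0.0f));       // 1: exactly (1,0,0)
    std::printf("%d\n", isRoughlyUnitVector(1.0f, 0.0f, 1e-8f));      // 1: within epsilon of (1,0,0)
    std::printf("%d\n", isRoughlyUnitVector(0.7071f, 0.7071f, 0.0f)); // 0: unit length, but not axis-aligned
}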
Can anyone suggest how to handle the following case for the SPOJ Tourist problem (http://www.spoj.com/problems/TOURIST/)?
The case is:
5 5
.****
*###*
*.*.*
.####
.*.*.
My implementation is similar to this sample code (mentioned in "How to solve this by recursion?"):
const int N = 100 + 5;
vector<string> board;
int dp[N][N][N]; // initialize with -1

int calc(int x1, int y1, int x2) {
    int n = board.size(), m = board[0].size();
    int y2 = x1 + y1 - x2;
    if (x1 >= n || x2 >= n || y1 >= m || y2 >= m) return 0;          // out of range
    if (board[x1][y1] == '#' || board[x2][y2] == '#') return 0;      // path blocked, so it's an invalid move
    if (dp[x1][y1][x2] != -1) return dp[x1][y1][x2];                 // avoid recalculation
    int res = 0;
    if (board[x1][y1] == '*') res++;
    if (board[x2][y2] == '*') res++;
    if (board[x1][y1] == '*' && x1 == x2 && y1 == y2) res = 1;       // both tourists on the same spot
    int r = calc(x1 + 1, y1, x2);          // first tourist down, second tourist right
    r = max(r, calc(x1 + 1, y1, x2 + 1));  // first tourist down, second tourist down
    r = max(r, calc(x1, y1 + 1, x2));      // first tourist right, second tourist right
    r = max(r, calc(x1, y1 + 1, x2 + 1));  // first tourist right, second tourist down
    res += r;
    dp[x1][y1][x2] = res;                  // memoize
    return res;
}
But I guess this won't work for the above case: the answer should be 4, but it gives 8 as output.
This call:
calc(x1+1, y1, x2+1)
seems to be the culprit, because the +1 to both x1 and x2 cancel out, so y2 stays the same as the previous y2. Your condition
if(dp[x1][y1][x2]!=-1) return dp[x1][y1][x2];
prevents an infinite loop, but it results in the value of each position being counted again.
One simple solution that comes to mind: instead of sending (x1+1, y1, x2+1) etc., send (x1, y1, x2) plus one piece of control information, say 0 (1R-2R), 1 (1R-2D), 2 (1D-2R), 3 (1D-2D). Then, based on the control information, calculate the new pair of (x1, y1) and (x2, y2), as in the sketch below.
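A hedged sketch of one reading of that suggestion; the encoding and the helper name are illustrative, not from the original post:

// move encodes both tourists' steps:
//   bit 1 = tourist 1 (0 = right, 1 = down), bit 0 = tourist 2 (0 = right, 1 = down)
int applyMove(int x1, int y1, int x2, int move) {
    int t1Down = (move >> 1) & 1;
    int t2Down = move & 1;
    int nx1 = x1 + t1Down;        // tourist 1: down if t1Down, else right
    int ny1 = y1 + (1 - t1Down);
    int nx2 = x2 + t2Down;        // tourist 2: down if t2Down, else right
    return calc(nx1, ny1, nx2);   // y2 is still derived inside calc() as before
}

// The four recursive calls inside calc() then become:
// int r = 0;
// for (int move = 0; move < 4; ++move)
//     r = max(r, applyMove(x1, y1, x2, move));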
I have a robotic arm composed of 2 servo motors. I am trying to calculate the inverse kinematics such that the arm is positioned in the middle of a canvas and can move to all possible points in both directions (left and right). This is an image of the system. The first servo moves 0-180 (anti-clockwise). The second servo moves 0-180 (clockwise).
Here is my code:
int L1 = 170;
int L2 = 230;

Vector shoulderV = new Vector(0, 0);
Vector targetV = new Vector(0, 400);

Vector difference = Vector.Subtract(targetV, shoulderV);
double L3 = difference.Length;
if (L3 > 400) { L3 = 400; }
if (L3 < 170) { L3 = 170; }

// a + b is the equivalent of the shoulder angle
double a = Math.Acos((L1 * L1 + L3 * L3 - L2 * L2) / (2 * L1 * L3));
double b = Math.Atan(difference.Y / difference.X);
// S1 is the shoulder angle
double S1 = a + b;
// S2 is the elbow angle
double S2 = Math.Acos((L1 * L1 + L2 * L2 - L3 * L3) / (2 * L1 * L2));

int shoulderAngle = Convert.ToInt16(Math.Round(S1 * 180 / Math.PI));
if (shoulderAngle < 0) { shoulderAngle = 180 - shoulderAngle; }
if (shoulderAngle > 180) { shoulderAngle = 180; }
int elbowAngle = Convert.ToInt16(Math.Round(S2 * 180 / Math.PI));
elbowAngle = 180 - elbowAngle;
Initially, when the system is first started, the arm is straightened with shoulder=90, elbow =0.
When I give positive x values, I get correct results on the left side of the canvas. However, I want the arm to reach the right side as well, and I do not get correct values when I enter negative x. What am I doing wrong? Do I need an extra servo to reach points on the right side?
Sorry if the explanation is not good. English is not my first language.
I suspect that you are losing a sign when you use Math.Atan(). I don't know what programming language or environment this is, but see whether it offers a two-argument arctangent.
Instead of this line:
double b = Math.Atan(difference.Y / difference.X);
Use something like this:
double b = Math.Atan2(difference.Y, difference.X);
When difference.Y and difference.X have the same sign, dividing them produces a positive value, so you cannot tell the case where both are positive apart from the case where both are negative. In other words, you cannot differentiate between 30 and 210 degrees, for example.
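To see the difference concretely, here is a minimal C++ sketch; the same reasoning applies to C#'s Math.Atan2:

#include <cmath>
#include <cstdio>

int main() {
    double y = 1.0;
    // atan() only sees the ratio, so x = 2 and x = -2 are indistinguishable:
    std::printf("atan(1/2)    = % f rad\n", std::atan(y / 2.0));   //  0.4636
    std::printf("atan(1/-2)   = % f rad\n", std::atan(y / -2.0));  // -0.4636 (wrong quadrant)
    // atan2() keeps the signs of both arguments and recovers the quadrant:
    std::printf("atan2(1, 2)  = % f rad\n", std::atan2(y, 2.0));   //  0.4636
    std::printf("atan2(1, -2) = % f rad\n", std::atan2(y, -2.0));  //  2.6779 (correct)
}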
When I try to sum up the N previous frames stored in a list and then divide by the number of frames, the background model produced is not as expected. I can tell because I tried the algorithm on the same video in Matlab earlier.
class TemporalMeanFilter {
private:
    int learningCount;
    list<Mat> history;
    int height, width;

    Mat buildModel() {
        if (history.size() == 0)
            return Mat();

        Mat image_avg(height, width, CV_8U, Scalar(0));
        double alpha = (1.0 / history.size());

        list<Mat>::iterator it = history.begin();
        cout << "History size: " << history.size() << " Weight per cell: " << alpha << endl;
        while (it != history.end()) {
            image_avg += (*it * alpha);
            it++;
        }
        return image_avg;
    }

public:
    TemporalMeanFilter(int height, int width, int learningCount) {
        this->learningCount = learningCount;
        this->height = height;
        this->width = width;
    }

    void applyFrameDifference(Mat& frame_diff, Mat& bg_model, Mat& input_frame) {
        if (history.size() == learningCount)
            history.pop_front();
        history.push_back(input_frame);

        bg_model = buildModel();
        frame_diff = bg_model - input_frame;
    }
};
// The main looks like this:
// ... reading video from file
TemporalMeanFilter meanFilter(height, width, 50); // background subtraction algorithm
meanFilter.applyFrameDifference(diff_frame, bg_model, curr_frame);
// ... displaying on screen ... prog ends
Image:
http://imagegur.com/images/untitled.png
The left one is the bg_model, the middle is the curr_frame, and the right one is the output.
Maybe it's because of the rounding off done on CV_8U? I tried changing to CV_32FC1, but then the program just crashed, because for some reason it couldn't add two CV_32FC1 matrices.
Any insight would be greatly appreciated. Thanks!
More info:
Inside the class, I now keep the average in a CV_16UC1 Mat to prevent clipping, but it results in an error after successive additions.
The add function / operator+ both change the result type from CV_16UC1 to CV_8UC1, and that is what causes this error. Any suggestion on how to ask it to preserve the original datatype? (PS: I asked politely... didn't work)
background_model += *it;

OpenCV Error: Bad argument (When the input arrays in add/subtract/multiply/divide
functions have different types, the output array type must be explicitly specified)
in unknown function, file C:\buildslave64\win64_amdocl\2_4_PackSlave-win32-vc11-shared\opencv\modules\core\src\arithm.cpp, line 1313
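For what it's worth, the message itself points at one way out: cv::add accepts an explicit output type (its dtype parameter) when the inputs differ. A hedged sketch, keeping the CV_16UC1 accumulator described above:

// Accumulate a CV_8UC1 frame into a CV_16UC1 model by naming the output type.
cv::add(background_model, *it, background_model, cv::noArray(), CV_16UC1);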
You're right that it's almost certainly the rounding error you get by accumulating scaled greyscale values. There's no reason why it should crash using floating-point pixels, though, so you should try something like this:
Mat buildModel()
{
    if (history.size() == 0)
        return Mat();

    Mat image_avg = Mat::zeros(height, width, CV_32FC1);
    double alpha = (1.0 / history.size());

    list<Mat>::iterator it = history.begin();
    while (it != history.end())
    {
        Mat image_temp;
        (*it).convertTo(image_temp, CV_32FC1);  // promote each frame to float
        image_avg += image_temp;
        it++;
    }
    image_avg *= alpha;  // divide once at the end

    return image_avg;
}
Depending on what you want to do with the result, you may need to normalize it or rescale it or convert back to greyscale before display etc.
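For display, a minimal sketch of converting back to 8-bit; this assumes the averaged values are still within the source frames' 0 to 255 range:

Mat bg_float = buildModel();
Mat bg_display;
bg_float.convertTo(bg_display, CV_8UC1);  // back to greyscale bytes for imshow
imshow("background model", bg_display);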
Given a line made up of several points, how do I make the line smoother/ curvier/ softer through adding intermediate points -- while keeping the original points completely intact and unmoved?
To illustrate, I want to go from the above to the below in this illustration:
Note how in the upper picture, if we start at the bottom, there is a sharp right turn. In the lower image, however, this sharp right turn is made a bit "softer" by adding an intermediate point positioned in the middle of the two points, using averages of the angles of the other lines. (Put differently, imagine the lines a race car would drive, since it couldn't abruptly change direction.) Note, however, that none of the original points was "touched"; I just added more points.
Thanks!! For what it's worth, I'm implementing this using JavaScript and Canvas.
with each pair of edges (e1 & e2) adjacent to a 'middle' edge (me), do:
    let X = 0.5 * length(me)
    find the 2 cubic bezier control points by extending the adjacent edges by X (see the ExtendLine routine below)
    get the midpoint of the cubic bezier (by applying the formula below)
    insert this new 'midpoint' between me's two coordinates.
FloatPoint ExtendLine(const FloatPoint A, const FloatPoint B, float distance)
{
    FloatPoint newB;
    float lenAB = sqrt((A.x - B.x) * (A.x - B.x) + (A.y - B.y) * (A.y - B.y));
    newB.x = B.x - (B.x - A.x) / lenAB * distance;
    newB.y = B.y - (B.y - A.y) / lenAB * distance;
    return newB;
}
Edit: Formula for Bezier Curve midpoint: p(0.5) = 0.125(p0) + 0.375(p1) + 0.375(p2) + 0.125(p3)
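As a sketch, that midpoint formula translates directly into code (FloatPoint as in ExtendLine above; the function name is mine):

FloatPoint BezierMidpoint(const FloatPoint p0, const FloatPoint p1,
                          const FloatPoint p2, const FloatPoint p3)
{
    // p(0.5) = 0.125*p0 + 0.375*p1 + 0.375*p2 + 0.125*p3
    FloatPoint m;
    m.x = 0.125f * p0.x + 0.375f * p1.x + 0.375f * p2.x + 0.125f * p3.x;
    m.y = 0.125f * p0.y + 0.375f * p1.y + 0.375f * p2.y + 0.125f * p3.y;
    return m;
}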
The following code, found elsewhere on this site, does the job for me in the specific JavaScript/Canvas context I'm using -- but please see Angus' answer for a more general approach:
var max = points.length;
context.beginPath();
var i = 0;
context.moveTo(points[i].x, points[i].y);
for (i = 1; i < max - 2; i++) {
    // Use the midpoint between successive points as the curve endpoint,
    // with the original point as the quadratic control point.
    var xc = (points[i].x + points[i + 1].x) * .5;
    var yc = (points[i].y + points[i + 1].y) * .5;
    context.quadraticCurveTo(points[i].x, points[i].y, xc, yc);
}
// Curve through the last two points.
context.quadraticCurveTo(points[max - 2].x, points[max - 2].y, points[max - 1].x, points[max - 1].y);
context.closePath();
context.stroke();