calculate bisector segment coordinates - geometry

I think there is a pretty straightforward answer to this, but I can't find it. My geometry lessons are too far away for this. The problem is:
Given 2 points A and B (coordinates Ax, Ay, Bx and By), I want to find the coordinates of points C and D so that the segments [AB] and [CD] bisect each other (they cross at their common midpoint) and [CD] has a length of d (a variable).
I want to find the equations giving me Cx, Cy, Dx and Dy from Ax, Ay, Bx, By and d.
Here is a little schema of the problem, and an image of the intended result:
I already know how to find the center point of [AB] ((Ax+Bx)/2, (Ay+By)/2), how to find the slope of the [AB] segment ((By-Ay)/(Bx-Ax)) and from it the slope of the [CD] segment ((Ax-Bx)/(By-Ay)). But then I get stuck on how to get my two points. I thought I could calculate the angle from the slope, then use it with some trigonometry to get the coordinates, but that sounds like a heavy, ugly and unnecessary calculation...
It feels so close, but I still can't get it.
I also found this post, which is almost perfect, except that the length cannot be chosen there: it has to be the same as the first segment's.
I don't think this is language-dependent, but if you must know, I'm doing a mini prototype in Processing and will probably port it to JavaScript later.
Thanks for any help.

The basic trick here is that, in 2D, the perpendicular to a vector (x, y) is simply ±(-y, x). (One gets this by computing the cross product with the (0, 0, 1) vector in 3D and projecting back to 2D.) So what you need to do is:
Get the midpoint between A and B (you have done that).
Get the vector from A to B, which is B - A = (x, y) = (bx - ax, by - ay).
Get the perpendicular vector: (-y, x).
Normalize it. Let length = sqrt(y*y + x*x), then norm = (-y/length, x/length).
Multiply the normalized perpendicular by ±d/2 (you want the distance between C and D to be d, so each endpoint lies d/2 from the center), and add the results to the center point to get C and D.
No slopes or trig functions are required.
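A minimal JavaScript sketch of those steps (you mention porting to JavaScript later; the function name and the {x, y} point format are my own choices):

function perpendicularSegment(A, B, d) {
  // 1. Midpoint of [AB].
  const mx = (A.x + B.x) / 2;
  const my = (A.y + B.y) / 2;
  // 2. Vector from A to B.
  const vx = B.x - A.x;
  const vy = B.y - A.y;
  // 3.-4. Perpendicular vector (-vy, vx), normalized.
  const len = Math.sqrt(vx * vx + vy * vy); // assumes A and B are distinct
  const px = -vy / len;
  const py = vx / len;
  // 5. Step d/2 away from the midpoint in both directions.
  return {
    C: { x: mx + px * d / 2, y: my + py * d / 2 },
    D: { x: mx - px * d / 2, y: my - py * d / 2 },
  };
}

For example, perpendicularSegment({x: 0, y: 0}, {x: 4, y: 0}, 2) gives C = (2, 1) and D = (2, -1).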

Related

corners of angled rect in 3d

I've got 2 points in 3D space (with the same y coordinate). I'll call them c and m. I want to find the corner points (marked in the pic as p1-p4) of a square with the width w. The important thing is that the square is not parallel to the x-axis. If it were, I could just do this (for p1 as an example):
p1.x = m.x + w / 2
p1.y = m.y + w / 2
p1.z = m.z
How would I do the same with an angled square? These are all the given points:
m; c
and lengths:
w; d
There are multiple ways to do it, but here's one.
If the two points are guaranteed to have the same y value, you should be able to do it as follows.
Take 'm - c' and call that u. Normalize u. Then take the cross product of u and the y axis to get v, a vector parallel to the xz plane that's perpendicular to u. (This can be optimized, but that's unlikely to be important.) Then take the cross product of u and v to get a third vector, w. Note that you can use 'm - c' or 'c - m', or use different orders for the cross-product arguments, and it'll still work, but the resulting vectors may point in different directions (but only opposite directions). You can also normalize at different points in the process and get the same results at the end.
Once you have m, v, and w, you can use some basic vector math to compute the corners.
[Edit: I see you have a variable named 'w', so I should clarify that the 'w' in my example is a different 'w' than yours. As for your 'w' and 'd', those would factor in in the vector math I mentioned at the end.]
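A rough JavaScript sketch of that vector math under my own assumptions (the square is centered on m and spans v and the third basis vector, which I call n here to avoid clashing with your width w; the helper names are mine too):

function cross(a, b) {
  return {
    x: a.y * b.z - a.z * b.y,
    y: a.z * b.x - a.x * b.z,
    z: a.x * b.y - a.y * b.x,
  };
}

function normalize(a) {
  const len = Math.sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
  return { x: a.x / len, y: a.y / len, z: a.z / len };
}

function squareCorners(m, c, w) {
  const u = normalize({ x: m.x - c.x, y: m.y - c.y, z: m.z - c.z });
  const v = normalize(cross(u, { x: 0, y: 1, z: 0 })); // horizontal, perpendicular to u
  const n = cross(u, v); // completes the basis (vertical, since u and v are horizontal)
  const h = w / 2;
  // p1..p4 = m +/- h*v +/- h*n, a square of side w centered on m.
  return [
    { x: m.x + h * (v.x + n.x), y: m.y + h * (v.y + n.y), z: m.z + h * (v.z + n.z) },
    { x: m.x + h * (v.x - n.x), y: m.y + h * (v.y - n.y), z: m.z + h * (v.z - n.z) },
    { x: m.x - h * (v.x + n.x), y: m.y - h * (v.y + n.y), z: m.z - h * (v.z + n.z) },
    { x: m.x - h * (v.x - n.x), y: m.y - h * (v.y - n.y), z: m.z - h * (v.z - n.z) },
  ];
}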

Perspective Projection: Proving that 1/z is Linear?

In 3D rendering (or geometry for that matter), in the rasterization algorithm, when you project the vertices of a triangle onto the screen and then find whether a pixel overlaps the 2D triangle, you often need to find the depth or the z-coordinate of the triangle where the pixel overlaps it. Generally, the method consists of computing the barycentric coordinates of the pixel in the 2D "projected" image of the triangle, and then using these coordinates to interpolate the z-coordinates of the triangle's original vertices (before the vertices got projected).
Now it's written in all textbooks that you can't interpolate the vertices' z-coordinates directly, but that you need to do this instead:
(sorry, I can't get LaTeX to work)
1/z = w0 * 1/v0.z + w1 * 1/v1.z + w2 * 1/v2.z
Where w0, w1, and w2 are the barycentric coordinates of the "pixel" on the triangle.
Now, what I am looking for are two things:
what would be the formal proof to show that interpolating z doesn't work?
what would be the formal proof to show that 1/z does the right thing?
To show this is not homework ;-) and that I have done some work on my own, here is the explanation I found for question 2.
Basically a triangle can be defined by a plane equation. Thus you can write:
Ax + By + Cz = D.
Then you isolate z to get z = (D - Ax - By)/C
Then you divide this formula by z, as you would with a perspective divide, and if you expand and regroup you get:
1/z = (A/D)*(x/z) + (B/D)*(y/z) + C/D
Then, naming A' = A/D, B' = B/D and C' = C/D, you get:
1/z = A'*(x/z) + B'*(y/z) + C'
It says that x/z and y/z are just the coordinates of the points on the triangle once projected on the screen, and that the expression on the right is an "affine" function, therefore 1/z is a linear function???
That doesn't seem like a demonstration to me. Or maybe it's the right idea, but I can't really say how you can tell, just by looking at the equation, that this is an affine function. If you multiply all the terms back through, you just get:
A'x + B'y + C'z = 1.
Which is basically just our original equation (you just need to replace A', B' and C' with the proper terms).
Not sure what you are trying to ask here, but if you look at:
1/z = A'*(x/z) + B'*(y/z) + C'
and rewrite it as:
1/z = A'u + B'v + C'
where (u,v) are the screen coordinates of the triangle after perspective projection, you can see that the depth z of a point on the triangle is not linearly related to (u,v), but 1/depth is, and that is what the textbooks are trying to teach you.
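A tiny numeric illustration (my own example values): take two vertices at depths 1 and 3 and interpolate at the projected midpoint, where both barycentric weights are 0.5.

const z0 = 1, z1 = 3;      // camera-space depths of two vertices
const w0 = 0.5, w1 = 0.5;  // barycentric weights at the projected midpoint
const zLinear = w0 * z0 + w1 * z1;        // 2.0 -- wrong in general
const zCorrect = 1 / (w0 / z0 + w1 / z1); // 1.5 -- perspective-correct

Interpolating z directly gives 2.0, while the depth actually seen at that screen position, recovered through 1/z, is 1.5.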

Finding the original position of a point on an image after rotation

I have the x, y coordinates of a point on an image rotated by a certain angle. I want to find the coordinates of the same point in the original, non-rotated image.
Please check the first image which is simpler:
UPDATED image, SIMPLIFIED:
OLD image:
Let's say the first point is A, the second is B and the last is C. I assume you have the rotation matrix R (see the Wikipedia article on rotation matrices if not) and the translation vector t, so that B = R*A and C = B + t.
It follows that C = R*A + t, and so A = R^-1 * (C - t).
Edit: if you only need the non-rotated new point, simply do D = R^-1 * C.
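As a minimal sketch (my own code, for the simple 2D case of a rotation about the origin by an angle theta): the inverse of a rotation by theta is just the rotation by -theta.

function unrotate(point, theta) {
  // Apply R^-1, i.e. the rotation matrix for angle -theta.
  const c = Math.cos(-theta), s = Math.sin(-theta);
  return {
    x: c * point.x - s * point.y,
    y: s * point.x + c * point.y,
  };
}

If a translation t was also applied, subtract it first, as in A = R^-1 * (C - t) above.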
The first thing to do is to define the reference system (how "where the point lies with respect to each image" will be translated into numbers). I guess that you want to rely on a basic 2D reference system given by a single point (a pair of X/Y values), for example the left/lower corner (min. X and min. Y).
The algorithm is pretty straightforward:
Getting the new defining reference point associated with the rotated shape (min. X and min. Y), that is, determining RefX_new and RefY_new.
Applying a basic conversion between reference systems:
X_old = X_new + (RefX_new - RefX_old)
Y_old = Y_new + (RefY_new - RefY_old)
----------------- UPDATE TO RELATE FORMULAE TO NEW CAR PIC
RefX_old = min X value of the CarFrame before being rotated.
RefY_old = max Y value of the CarFrame before being rotated.
RefX_new = min X value of the CarFrame after being rotated.
RefY_new = max Y value of the CarFrame after being rotated.
X_new = X of the point with respect to the CarFrame after being rotated. For example: if RefX_new = 5 with respect to absolute frame (0,0) and X of the point with respect to this absolute frame is 8, X_new would be 3.
Y_new = Y of the point with respect to the CarFrame after being rotated (analogous to the point above)
X_old_C = X_new_C (with respect to CarFrame) + (RefX_new(CarFrame_C) - RefX_old(CarFrame_A))
Y_old_C = Y_new_C (with respect to CarFrame) + (RefY_new(CarFrame_C) - RefY_old(CarFrame_A))
These coordinates are with respect to the CarFrame, and thus you might have to convert them to the absolute frame (0,0, I guess), as explained above, that is:
X_old_D_absolute_frame = X_old_C + (RefX_new(CarFrame_C) + RefX_global(i.e., 0))
Y_old_D_absolute_frame = Y_old_C + (RefY_new(CarFrame_C) + RefY_global(i.e., 0))
(Although you should do that once the CarFrame is in its "definitive position" with respect to the global frame, that is, on picture D (the point has the same coordinates with respect to the CarFrame in both picture C and D, but different ones with respect to the global frame).)
It might seem a bit complex put this way, but it is really simple. You just have to think carefully through one case and create the algorithm performing all the actions. The idea is extremely simple: if I am at 8 inside something that starts at 5, I am at 3 with respect to the container.
------------ UPDATE IN THE METHODOLOGY
As said in the comment, these last pictures prove that the originally-proposed calculation of the reference (max. Y/min. X) is not right: it shouldn't be the max./min. values of the carFrame but the minimum distances to the closest sides (the perpendicular line from the left/bottom side to the point).
------------ TRIGONOMETRIC CALCS FOR THE SPECIFIC EXAMPLE
The proposed algorithm is the one you should apply in any situation. In this specific case, though, the most difficult part is not moving from one reference system to the other, but defining the reference point in the rotated system. Once this is done, the application to the non-rotated case is immediate.
Here are some calcs to perform this action (I have done them pretty quickly, so better take them as an orientation and redo them on your own); also, I have only considered the case in the pictures, that is, rotation around the left/bottom point:
X_rotated = dx * Cos(alpha)
where dx = X_orig - (max_Y_CarFrame - Y_orig) * Tan(alpha)
Y_rotated = dy * Cos(alpha)
where dy = Y_orig - X_orig * Tan(alpha)
NOTE: (max_Y_CarFrame - Y_orig) in dx and X_orig in dy assume that the basic reference system is (0,0) (min. X and min. Y). If this is not the case, you would have to change these variables.
X_rotated and Y_rotated give the perpendicular distance from the point to the closest side of the carFrame (the left and bottom side, respectively). By applying these formulae (I insist: analyse them carefully), you get X_old_D_absolute_frame/Y_old_D_absolute_frame; that is, you just have to add the left/bottom values of the carFrame (if it is located at 0,0, these are the final values).
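A direct JavaScript transcription of those formulas, to be taken as an orientation just like the calcs themselves (alpha is the rotation angle in radians; the reference frame is assumed to start at (0, 0)):

function rotatedDistances(xOrig, yOrig, maxYCarFrame, alpha) {
  // dx and dy as defined in the calcs above.
  const dx = xOrig - (maxYCarFrame - yOrig) * Math.tan(alpha);
  const dy = yOrig - xOrig * Math.tan(alpha);
  return {
    xRotated: dx * Math.cos(alpha), // perpendicular distance to the left side
    yRotated: dy * Math.cos(alpha), // perpendicular distance to the bottom side
  };
}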

Finding most distant point in circle from point

I'm trying to find the best way to get the point of a circle most distant from a specified point in 2D space. What I have found so far is how to get the distance between the point and the circle's position, but I'm not entirely sure how to expand this to find the most distant point of the circle.
The known variables are:
Point a
Point b (circle position)
Radius r (circle radius)
To find the distance between the point and the circle position, I have found this:
xd = x2 - x1
yd = y2 - y1
Distance = SquareRoot(xd * xd + yd * yd)
It seems to me this is part of the solution. How would this be expanded to get the position of Point x in the image below?
As an additional but optional part of the question: I have read in some places that it is possible to get the distance part without using the square root, which is performance-intensive and should be avoided if fast code is needed. In my case, I would be doing this calculation quite often; any comments on this within the context of the main question would be welcome too.
What about this?
Calculate A-B.
We now have a vector pointing from the center of the circle towards A (if B is the origin, skip this and just consider point A a vector).
Normalize.
Now we have a well defined length (the length is 1)
If the circle is not of unit radius, multiply by radius. If it is unit radius, skip this.
Now we have the correct length.
Invert the sign (this can be done in one step with 3.: just multiply by the negative radius)
Now our vector points in the correct direction.
Add B (if B is the origin, skip this).
Now our vector is offset correctly so its endpoint is the point we want.
(Alternatively, you could calculate B-A to save the negation, but then you have to do one more operation to offset the origin correctly.)
By the way, it works the same in 3D, except the circle would be a sphere and the vectors would have 3 components (or 4 if you use homogeneous coords; in that case remember, for correctness, to set w to 0 when "turning points into vectors" and to 1 at the end when making a point from the vector).
EDIT:
(in reply to the pseudocode request)
Assuming you have a vec2 class, which is a struct of two float numbers with operators for vector subtraction and scalar multiplication (pretty trivial, around a dozen lines of code), and a function normalize, which needs to be no more than a shorthand for multiplying with inv_sqrt(x*x + y*y), the pseudocode (my pseudocode here is something like a C++/GLSL mix) could look something like this:
vec2 most_distant_on_circle(vec2 const& B, float r, vec2 const& A)
{
    vec2 P(A - B);
    normalize(P);
    return -r * P + B;
}
Most math libraries that you'd use should have all of these functions and types built in. HLSL and GLSL have them as first-class primitives and intrinsic functions. Some GPUs even have a dedicated normalize instruction.
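For completeness, here is my own transcription of the same idea in plain JavaScript, together with the standard answer to the square-root aside in the question: when you only need to compare distances, comparing squared distances gives the same ordering and needs no sqrt at all.

// Transcription of the pseudocode above ({x, y} point objects are my choice).
function mostDistantOnCircle(B, r, A) {
  const dx = A.x - B.x, dy = A.y - B.y;
  const len = Math.sqrt(dx * dx + dy * dy); // assumes A is not at the circle's center
  return { x: B.x - r * dx / len, y: B.y - r * dy / len };
}

// Square-root aside: to merely compare distances, compare squared distances.
function isCloserToA(p, q, A) {
  const dpx = p.x - A.x, dpy = p.y - A.y;
  const dqx = q.x - A.x, dqy = q.y - A.y;
  return dpx * dpx + dpy * dpy < dqx * dqx + dqy * dqy;
}

Note that computing the most distant point itself still needs the square root (or an inverse square root) for the normalization; the squared-distance trick only applies when the actual distance value is never used.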

Projective transformation

Given two image buffers (assume each is an array of ints of size width * height, with each element a color value), how can I map an area defined by a quadrilateral from one image buffer into the other (always square) image buffer? I'm led to understand this is called a "projective transformation".
I'm also looking for a general (not language- or library-specific) way of doing this, such that it could be reasonably applied in any language without relying on "magic function X that does all the work for me".
An example: I've written a short program in Java using the Processing library (processing.org) that captures video from a camera. During an initial "calibrating" step, the captured video is output directly into a window. The user then clicks on four points to define an area of the video that will be transformed, then mapped into the square window during subsequent operation of the program. If the user were to click on the four points defining the corners of a door visible at an angle in the camera's output, then this transformation would cause the subsequent video to map the transformed image of the door to the entire area of the window, albeit somewhat distorted.
Using linear algebra is much easier than all that geometry! Plus you won't need sine, cosine, etc., so you can store each number as a rational fraction and get the exact numerical result if you need it.
What you want is a mapping from your old (x,y) coordinates to your new (x',y') coordinates. You can do it with matrices. You need to find the 2-by-4 projection matrix P such that P times the old coordinates equals the new coordinates. We'll assume that you're mapping lines to lines (not, for instance, straight lines to parabolas). Because you have a projection (parallel lines don't stay parallel) and a translation (sliding), you need an x*y term and a constant term, too. Drawn as matrices:
            [ x ]
[a b c d] * [ y ] = [x']
[e f g h]   [x*y]   [y']
            [ 1 ]
You need to know a through h, so solve these equations:
a*x_0 + b*y_0 + c*x_0*y_0 + d = i_0
a*x_1 + b*y_1 + c*x_1*y_1 + d = i_1
a*x_2 + b*y_2 + c*x_2*y_2 + d = i_2
a*x_3 + b*y_3 + c*x_3*y_3 + d = i_3
e*x_0 + f*y_0 + g*x_0*y_0 + h = j_0
e*x_1 + f*y_1 + g*x_1*y_1 + h = j_1
e*x_2 + f*y_2 + g*x_2*y_2 + h = j_2
e*x_3 + f*y_3 + g*x_3*y_3 + h = j_3
Again, you can use linear algebra:
[x_0 y_0 x_0*y_0 1]   [a e]   [i_0 j_0]
[x_1 y_1 x_1*y_1 1] * [b f] = [i_1 j_1]
[x_2 y_2 x_2*y_2 1]   [c g]   [i_2 j_2]
[x_3 y_3 x_3*y_3 1]   [d h]   [i_3 j_3]
Plug your corners in for x_n, y_n, i_n, j_n. (Corners work best because they are far apart, which decreases the error if you're picking the points from, say, user clicks.) Take the inverse of the 4x4 matrix and multiply it by the right side of the equation; the transpose of that result is P. You should be able to find functions online to compute a matrix inverse and to multiply matrices.
Where you'll probably have bugs:
When computing, remember to check for division by zero. That's a sign that your matrix is not invertible, which might happen if you try to map one (x,y) coordinate to two different points.
If you write your own matrix math, remember that matrices are usually specified row,column (vertical,horizontal) and screen graphics are x,y (horizontal,vertical). You're bound to get something wrong the first time.
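A hedged JavaScript sketch of that fitting step (my own helper names; instead of explicitly inverting and transposing, it solves the same 4x4 system twice with naive Gaussian elimination, once for a..d and once for e..h):

// src and dst are arrays of four {x, y} corner points.
function fitMapping(src, dst) {
  const M = src.map(p => [p.x, p.y, p.x * p.y, 1]); // rows [x, y, x*y, 1]
  const abcd = gaussianSolve(M.map(r => r.slice()), dst.map(p => p.x));
  const efgh = gaussianSolve(M.map(r => r.slice()), dst.map(p => p.y));
  return { abcd, efgh };
}

function gaussianSolve(A, b) {
  const n = A.length;
  for (let i = 0; i < n; i++) {
    // Partial pivoting; a zero pivot here is the non-invertible case above.
    let p = i;
    for (let k = i + 1; k < n; k++) if (Math.abs(A[k][i]) > Math.abs(A[p][i])) p = k;
    [A[i], A[p]] = [A[p], A[i]];
    [b[i], b[p]] = [b[p], b[i]];
    for (let k = i + 1; k < n; k++) {
      const f = A[k][i] / A[i][i];
      for (let j = i; j < n; j++) A[k][j] -= f * A[i][j];
      b[k] -= f * b[i];
    }
  }
  const x = new Array(n).fill(0);
  for (let i = n - 1; i >= 0; i--) {
    let s = b[i];
    for (let j = i + 1; j < n; j++) s -= A[i][j] * x[j];
    x[i] = s / A[i][i];
  }
  return x;
}

Mapping a point afterwards is then x' = a*x + b*y + c*x*y + d and y' = e*x + f*y + g*x*y + h.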
EDIT
The assumption below, that ratios of angles are invariant, is incorrect. Projective transformations instead preserve cross-ratios and incidence. A solution then is:
Find the point C' at the intersection of the lines defined by the segments AD and CP.
Find the point B' at the intersection of the lines defined by the segments AD and BP.
Determine the cross-ratio of B'DAC', i.e. r = (B'A * DC') / (DA * B'C').
Construct the projected line F'HEG'. The cross-ratio of these points is equal to r, i.e. r = (F'E * HG') / (HE * F'G').
F'F and G'G will intersect at the projected point Q, so by equating the cross-ratios and knowing the length of the side of the square, you can determine the position of Q with some arithmetic gymnastics.
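The construction steps above all come down to intersecting lines; a small self-contained helper for that (my own sketch, each infinite line given by two points lying on it):

function lineIntersection(p1, p2, p3, p4) {
  const d = (p1.x - p2.x) * (p3.y - p4.y) - (p1.y - p2.y) * (p3.x - p4.x);
  if (d === 0) return null; // the lines are parallel
  const a = p1.x * p2.y - p1.y * p2.x;
  const b = p3.x * p4.y - p3.y * p4.x;
  return {
    x: (a * (p3.x - p4.x) - (p1.x - p2.x) * b) / d,
    y: (a * (p3.y - p4.y) - (p1.y - p2.y) * b) / d,
  };
}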
Hmmmm... I'll take a stab at this one. This solution relies on the assumption that ratios of angles are preserved in the transformation. See the image for guidance (sorry for the poor image quality... it's REALLY late). The algorithm only provides the mapping of a point in the quadrilateral to a point in the square; you would still need to handle multiple quad points being mapped to the same square point.
Let ABCD be a quadrilateral where A is the top-left vertex, B is the top-right vertex, C is the bottom-right vertex and D is the bottom-left vertex. The pair (xA, yA) represent the x and y coordinates of the vertex A. We are mapping points in this quadrilateral to the square EFGH whose side has length equal to m.
Compute the lengths AD, CD, AC, BD and BC:
AD = sqrt((xA-xD)^2 + (yA-yD)^2)
CD = sqrt((xC-xD)^2 + (yC-yD)^2)
AC = sqrt((xA-xC)^2 + (yA-yC)^2)
BD = sqrt((xB-xD)^2 + (yB-yD)^2)
BC = sqrt((xB-xC)^2 + (yB-yC)^2)
Let thetaD be the angle at the vertex D and thetaC be the angle at the vertex C. Compute these angles using the cosine law:
thetaD = arccos((AD^2 + CD^2 - AC^2) / (2*AD*CD))
thetaC = arccos((BC^2 + CD^2 - BD^2) / (2*BC*CD))
We map each point P in the quadrilateral to a point Q in the square. For each point P in the quadrilateral, do the following:
Find the distance DP:
DP = sqrt((xP-xD)^2 + (yP-yD)^2)
Find the distance CP:
CP = sqrt((xP-xC)^2 + (yP-yC)^2)
Find the angle thetaP1 between CD and DP:
thetaP1 = arccos((DP^2 + CD^2 - CP^2) / (2*DP*CD))
Find the angle thetaP2 between CD and CP:
thetaP2 = arccos((CP^2 + CD^2 - DP^2) / (2*CP*CD))
The ratio of thetaP1 to thetaD should equal the ratio of thetaQ1 to 90 (the angles here are in degrees). Therefore, calculate thetaQ1:
thetaQ1 = thetaP1 * 90 / thetaD
Similarly, calculate thetaQ2:
thetaQ2 = thetaP2 * 90 / thetaC
Find the distance HQ:
HQ = m * sin(thetaQ2) / sin(180-thetaQ1-thetaQ2)
Finally, the x and y position of Q relative to the bottom-left corner of EFGH is:
x = HQ * cos(thetaQ1)
y = HQ * sin(thetaQ1)
You would have to keep track of how many colour values get mapped to each point in the square so that you can calculate an average colour for each of those points.
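A direct JavaScript transcription of the procedure (per the edit above, the angle-ratio assumption does not actually hold under projective transformations, so treat this as illustrative only; angles are in radians here, so 90 becomes PI/2 and 180 becomes PI):

function mapQuadPointToSquare(A, B, C, D, P, m) {
  const dist = (p, q) => Math.hypot(p.x - q.x, p.y - q.y);
  const AD = dist(A, D), CD = dist(C, D), AC = dist(A, C);
  const BD = dist(B, D), BC = dist(B, C);
  // Angles at D and C via the cosine law.
  const thetaD = Math.acos((AD * AD + CD * CD - AC * AC) / (2 * AD * CD));
  const thetaC = Math.acos((BC * BC + CD * CD - BD * BD) / (2 * BC * CD));
  const DP = dist(D, P), CP = dist(C, P);
  const thetaP1 = Math.acos((DP * DP + CD * CD - CP * CP) / (2 * DP * CD));
  const thetaP2 = Math.acos((CP * CP + CD * CD - DP * DP) / (2 * CP * CD));
  // Scale the angles so thetaD and thetaC map to right angles.
  const thetaQ1 = thetaP1 * (Math.PI / 2) / thetaD;
  const thetaQ2 = thetaP2 * (Math.PI / 2) / thetaC;
  const HQ = m * Math.sin(thetaQ2) / Math.sin(Math.PI - thetaQ1 - thetaQ2);
  // Position of Q relative to the bottom-left corner of EFGH.
  return { x: HQ * Math.cos(thetaQ1), y: HQ * Math.sin(thetaQ1) };
}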
I think what you're after is a planar homography; have a look at these lecture notes:
http://www.cs.utoronto.ca/~strider/vis-notes/tutHomography04.pdf
If you scroll down to the end you'll see an example of just what you're describing. I expect there's a function in the Intel OpenCV library that will do just this.
There is a C++ project on CodeProject that includes source for projective transformations of bitmaps. The maths are on Wikipedia here. Note that, as far as I know, a projective transformation will not map any arbitrary quadrilateral onto another, but it will do so for triangles; you may also want to look up skewing transforms.
If this transformation has to look good (as opposed to the way a bitmap looks when you resize it in Paint), you can't just create a formula that maps destination pixels to source pixels. Values in the destination buffer have to be based on a complex averaging of nearby source pixels, or else the results will be highly pixelated.
So unless you want to get into some complex coding, use someone else's magic function, as smacl and Ian have suggested.
Here's how I would do it in principle:
map the origin of A to the origin of B via a translation vector t.
take the unit vectors of A, (1,0) and (0,1), and calculate how they would be mapped onto the unit vectors of B.
this gives you a transformation matrix M so that every vector a in A maps to M*a + t
invert the matrix and negate the translation vector, so that for every vector b in B you have the inverse mapping b -> M^-1 * (b - t)
once you have this transformation, for each point in the target area in B, find the corresponding point in A and copy it.
The advantage of this mapping is that you only calculate the points you need, i.e. you loop over the target points, not the source points. It was a widely used technique in the "demo coding" scene a few years back.
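A sketch of that target-driven loop under my own assumptions (a 2x2 inverse matrix Minv as nested arrays, a translation (tx, ty), both buffers as width*height arrays of ints per the question, and nearest-neighbour sampling):

function inverseMap(src, dst, width, height, Minv, tx, ty) {
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      // b -> M^-1 (b - t): find the source point for this target pixel.
      const sx = Math.round(Minv[0][0] * (x - tx) + Minv[0][1] * (y - ty));
      const sy = Math.round(Minv[1][0] * (x - tx) + Minv[1][1] * (y - ty));
      if (sx >= 0 && sx < width && sy >= 0 && sy < height) {
        dst[y * width + x] = src[sy * width + sx];
      }
    }
  }
}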
