I've got a shape consisting of four points, A, B, C and D, of which only the positions are known. The goal is to transform these points so that they have specific angles and offsets relative to each other.
For example: A(-1,-1) B(2,-1) C(1,1) D(-2,1), which should be transformed into a perfect square (all angles 90°) with the offsets between AB, BC, CD and AD all being 2. The result should be a square slightly rotated counter-clockwise.
What would be the most efficient way to do this?
I'm using this for a simple block simulation program.
As Mark alluded to, we can use constrained optimization to find the side-2 square that minimizes the sum of squared distances to the corners of the original.
We need to minimize f = (a-A)^2 + (b-B)^2 + (c-C)^2 + (d-D)^2 (where each square is actually the dot product of the vector with itself) subject to some constraints.
Following the method of Lagrange multipliers, I chose the following distance constraints:
g1 = (a-b)^2 - 4
g2 = (c-b)^2 - 4
g3 = (d-c)^2 - 4
and the following angle constraints:
g4 = (b-a).(c-b)
g5 = (c-b).(d-c)
A quick napkin sketch should convince you that these constraints are sufficient.
We then want to minimize f subject to the g's all being zero.
The Lagrange function is:
L = f + Sum(i = 1 to 5, li gi)
where the l_i are the Lagrange multipliers.
The gradient equations are non-linear, so we have to compute the Hessian and use multivariate Newton's method to iterate to a solution.
Here's the solution I got (red) for the data given (black):
This took 5 iterations, after which the L2 norm of the step was 6.5106e-9.
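For concreteness, here is a minimal sketch of the same idea in Python that swaps the hand-rolled Newton iteration for SciPy's SLSQP solver; the objective f and the constraints g1 through g5 are exactly the ones above, and the original corners serve as the initial guess:
import numpy as np
from scipy.optimize import minimize

A, B, C, D = map(np.array, [(-1, -1), (2, -1), (1, 1), (-2, 1)])
start = np.concatenate([A, B, C, D]).astype(float)

def f(p):
    # f = |a-A|^2 + |b-B|^2 + |c-C|^2 + |d-D|^2
    return np.sum((p - start) ** 2)

def g(p):
    a, b, c, d = p[0:2], p[2:4], p[4:6], p[6:8]
    return [np.dot(a - b, a - b) - 4,   # g1: |a-b|^2 = 4
            np.dot(c - b, c - b) - 4,   # g2: |c-b|^2 = 4
            np.dot(d - c, d - c) - 4,   # g3: |d-c|^2 = 4
            np.dot(b - a, c - b),       # g4: right angle at b
            np.dot(c - b, d - c)]       # g5: right angle at c

res = minimize(f, start, method="SLSQP",
               constraints={"type": "eq", "fun": g})
print(res.x.reshape(4, 2))              # fitted corners a, b, c, d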
While Codie CodeMonkey's solution is a perfectly valid one (and a great use case for Lagrange multipliers at that), I believe it's worth mentioning that if the side length is not given, this particular problem actually has a closed-form solution.
We would like to minimise the distance between the corners of our fitted square and the ones of the given quadrilateral. This is equivalent to minimising the cost function:
f(x1,...,y4) = (x1-ax)^2+(y1-ay)^2 + (x2-bx)^2+(y2-by)^2 +
(x3-cx)^2+(y3-cy)^2 + (x4-dx)^2+(y4-dy)^2
Where Pi = (xi,yi) are the corners of the fitted square and A = (ax,ay) through D = (dx,dy) represent the given corners of the quadrilateral in clockwise order. Since we are fitting a square we have certain constraints regarding the positions of the four corners. In fact, two opposite corners are enough to describe a unique square (save for the mirror image on the diagonal).
Parametrization of the points
This means that two opposite corners are enough to represent our target square. We can parametrise the two remaining corners using the components of the first two. In the above example we express P2 and P4 in terms of P1 = (x1,y1) and P3 = (x3,y3). If you need a visualisation of the geometrical intuition behind the parametrisation of a square you can play with the interactive version.
P2 = (x2,y2) = ( (x1+x3-y3+y1)/2 , (y1+y3-x1+x3)/2 )
P4 = (x4,y4) = ( (x1+x3+y3-y1)/2 , (y1+y3+x1-x3)/2 )
Substituting for x2,x4,y2,y4 means that f(x1,...,y4) can be rewritten to:
f(x1,x3,y1,y3) = (x1-ax)^2+(y1-ay)^2 + ((x1+x3-y3+y1)/2-bx)^2+((y1+y3-x1+x3)/2-by)^2 +
(x3-cx)^2+(y3-cy)^2 + ((x1+x3+y3-y1)/2-dx)^2+((y1+y3+x1-x3)/2-dy)^2
a function which only depends on x1,x3,y1,y3. To find the minimum of the resulting function we then set the partial derivatives of f(x1,x3,y1,y3) equal to zero. They are the following:
df/dx1 = 4x1-dy-dx+by-bx-2ax = 0 --> x1 = ( dy+dx-by+bx+2ax)/4
df/dx3 = 4x3+dy-dx-by-bx-2cx = 0 --> x3 = (-dy+dx+by+bx+2cx)/4
df/dy1 = 4y1-dy+dx-by-bx-2ay = 0 --> y1 = ( dy-dx+by+bx+2ay)/4
df/dy3 = 4y3-dy-dx-2cy-by+bx = 0 --> y3 = ( dy+dx+by-bx+2cy)/4
You may see where this is going, as a simple rearrangement of the terms leads to the final solution.
Final solution
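Putting the parametrisation and the four solved partial derivatives together gives the fitted square directly. As a rough sketch in Python, under the same clockwise corner-order assumption as above:
def fit_square(ax, ay, bx, by, cx, cy, dx, dy):
    # P1 and P3 from the zeroed partial derivatives above
    x1 = ( dy + dx - by + bx + 2*ax) / 4
    x3 = (-dy + dx + by + bx + 2*cx) / 4
    y1 = ( dy - dx + by + bx + 2*ay) / 4
    y3 = ( dy + dx + by - bx + 2*cy) / 4
    # P2 and P4 from the diagonal parametrisation
    x2, y2 = (x1 + x3 - y3 + y1) / 2, (y1 + y3 - x1 + x3) / 2
    x4, y4 = (x1 + x3 + y3 - y1) / 2, (y1 + y3 + x1 - x3) / 2
    return (x1, y1), (x2, y2), (x3, y3), (x4, y4)
For corners listed counter-clockwise (as in the question's example), reverse the order before calling; otherwise the parametrisation fits the mirrored square.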
Related
Consider a piecewise cubic Bezier curve with N segments, defined in terms of 4N control points. How can I determine whether this curve passes the Vertical Line Test? That is: do there exist points x,y1,y2 such that y1!=y2 and both (x,y1) and (x,y2) lie on the curve? Additionally, it would be nice but not essential to return the values of the points x,y1,y2 if such points exist.
In principle, one could just evaluate the curve explicitly at a large number of points, but in my application the curves can have a large number of segments so this is prohibitively slow. Thus I am looking for an algorithm that operates on the control points only, and does not rely on explicitly evaluating the curves at a large number of points.
Closed curves will fail your test by definition, so let's look at open curves: for a curve to fail your vertical line test, at least one segment in your piecewise cubic polyBezier needs to "reverse direction" along the x-axis, which means its x component needs to have extrema in the inclusive interval [0,1].
For example, this curve doesn't (the right side shows the component function for x, with global extrema in red and inflections in purple):
(Also note that we're only drawing the [0,1] interval, but if we were to extend it, those global extrema wouldn't actually be at t=0 and t=1. This is in fact critically important, and we'll see why, below)
This curve also doesn't (but only just):
But this curve does:
As does this one. Twice, in fact:
As does a curve "just past its cusp":
Which is easier to see when we exaggerate it:
This means we need to find the first derivative, find out where it's zero, and then make sure that (because Beziers can have cusps) the sign of that derivative flips at that point (or points, as there can be two).
Turns out, the derivative is so trivial it's hardly even a computation, so for each segment we have:
w = [x1, x2, x3, x4]
Bx(t) = w[0] * (1-t)³ + 3 * w[1] * t(1-t)² + 3 * w[2] * t²(1-t) + w[3] * t³
// bezier form of the derivative:
v = [
3 * (w[1] - w[0]),
3 * (w[2] - w[1]),
3 * (w[3] - w[2]),
]
Bx'(t) = v[0] * (1-t)² + 2 * v[1] * t(1-t) + v[2] * t²
Which we can trivially rewrite to polynomial form:
a = v[0] - 2*v[1] + v[2]
b = 2 * (v[1] - v[0])
c = v[0]
Bx'(t) = a * t² + b * t + c
So we find the roots for that, which is a matter of applying the quadratic formula, which gives us 0, 1, or 2 roots.
if (a == 0), the quadratic degenerates to a linear equation with the single root t = -c/b (no root if b is 0 too)
denominator = 2 * a
discriminant = b * b - 2 * denominator * c
if (discriminant < 0) there are no (real) roots
d = sqrt(discriminant)
t1 = -(b + d) / denominator
if (0 ≤ t1 ≤ 1) then t1 is a valid root
t2 = (d - b) / denominator
if (0 ≤ t2 ≤ 1) then t2 is a valid root
If there are no (real) roots, then this segment does not cause the curve to fail your vertical line test, and we move on to the next segment and repeat until we've either found a failure, or we've run out of segments to test.
If it's 1 or 2 roots, we check that the signs of Bx'(t-ε) and Bx'(t+ε) for some very small value of ε differ (because we want to make sure we don't conclude that the above curve with a cusp fails the test: it has a zero derivative but it "keeps going in the same direction" across that root instead of flipping direction). If they do, this segment is (one of) the segment(s) that makes your curve fail your vertical line test.
Also, note that we're testing even if the root is at 0 or 1: the piecewise curve might inflect "across segments", and we can take advantage of the fact that we can evaluate Bx'(t) for t = -ε or t = 1+ε to see if we flip direction, even if we never draw the curve prior to t=0 or after t=1.
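To make the per-segment test concrete, here is a small sketch in Python of the procedure just described (quadratic roots of Bx'(t) plus the sign-flip check around each root); the control-point values in the last line are made up:
import math

def segment_reverses_x(x1, x2, x3, x4, eps=1e-6):
    # derivative control points, then polynomial coefficients of Bx'(t)
    v0, v1, v2 = 3*(x2 - x1), 3*(x3 - x2), 3*(x4 - x3)
    a, b, c = v0 - 2*v1 + v2, 2*(v1 - v0), v0

    def dx(t):
        return a*t*t + b*t + c

    if abs(a) < 1e-12:                      # degenerate: linear derivative
        roots = [] if abs(b) < 1e-12 else [-c / b]
    else:
        disc = b*b - 4*a*c
        if disc < 0:
            return False                    # no real roots, no reversal
        d = math.sqrt(disc)
        roots = [(-b - d) / (2*a), (-b + d) / (2*a)]

    # a root only counts if the derivative actually changes sign there
    return any(0 <= t <= 1 and dx(t - eps) * dx(t + eps) < 0 for t in roots)

print(segment_reverses_x(0, 3, -2, 1))      # strongly S-shaped in x: prints True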
Leaving solution here for reference. Not especially elegant but gets the job done. Assume that the curve is parametrized from left to right. The key observation is that the curve intersects a vertical at two points if and only if there is a point at which the derivative x'(t) of the x component is strictly negative. For a cubic Bezier curve the derivative is a quadratic polynomial x'(t)=at^2+bt+c. So we just need to check whether the minimum of this quadratic over the interval 0<=t<=1 is negative, which is straightforward (if a bit tedious) using basic algebra.
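A small sketch of that check in Python, assuming the quadratic coefficients a, b, c of x'(t) have already been computed as in the previous answer:
def x_derivative_goes_negative(a, b, c):
    # minimum of a*t^2 + b*t + c over [0, 1]: check the endpoints and,
    # if it lies inside the interval, the vertex of the parabola
    candidates = [0.0, 1.0]
    if a != 0:
        t_vertex = -b / (2 * a)
        if 0 <= t_vertex <= 1:
            candidates.append(t_vertex)
    return min(a*t*t + b*t + c for t in candidates) < 0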
I have the x, y co-ordinates of a point on a rotated image by certain angle. I want to find the co-ordinates of the same point in the original, non-rotated image.
Please check the first image which is simpler:
UPDATED image, SIMPLIFIED:
OLD image:
Let's say the first point is A, the second is B and the last is C. I assume you have the rotation matrix R (see the Wikipedia article on rotation matrices if not) and the translation vector t, so that B = R*A and C = B + t.
It follows that C = R*A + t, and so A = R^-1*(C - t).
Edit: If you only need the non rotated new point, simply do D = R^-1*C.
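As a minimal sketch of those two relations in Python (the angle and translation values here are made-up placeholders):
import numpy as np

theta = np.radians(30)                          # assumed rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
t = np.array([5.0, 2.0])                        # assumed translation

C = np.array([7.0, 4.0])                        # point in the rotated image
A = np.linalg.inv(R) @ (C - t)                  # A = R^-1 * (C - t)
D = np.linalg.inv(R) @ C                        # rotation only, no translation
print(A, D)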
The first thing to do is define the reference system (how "where the points lie with respect to each image" will be translated into numbers). I guess that you want to rely on a basic 2D reference system, given by a single point (a pair of X/Y values), for example the left/lower corner (min. X and min. Y).
The algorithm is pretty straightforward:
Getting the new defining reference point associated with the rotated shape (min. X and min. Y), that is, determining RefX_new and RefY_new.
Applying a basic conversion between reference systems:
X_old = X_new + (RefX_new - RefX_old)
Y_old = Y_new + (RefY_new - RefY_old)
----------------- UPDATE TO RELATE FORMULAE TO NEW CAR PIC
RefX_old = min X value of the CarFrame before being rotated.
RefY_old = max Y value of the CarFrame before being rotated.
RefX_new = min X value of the CarFrame after being rotated.
RefY_new = max Y value of the CarFrame after being rotated.
X_new = X of the point with respect to the CarFrame after being rotated. For example: if RefX_new = 5 with respect to absolute frame (0,0) and X of the point with respect to this absolute frame is 8, X_new would be 3.
Y_new = Y of the point with respect to CarFrame after being rotated (equivalently to point above)
X_old_C = X_new_C(respect to CarFrame) + (RefX_new(CarFrame_C) - RefX_old(CarFrame_A))
Y_old_C = Y_new_C(respect to CarFrame) + (RefY_new(CarFrame_C) - RefY_old(CarFrame_A))
These coordinates are with respect to the CarFrame, and thus you might have to update them with respect to the absolute frame (0,0, I guess), as explained above, that is:
X_old_D_absolute_frame = X_old_C + (RefX_new(CarFrame_C) + RefX_global(i.e., 0))
Y_old_D_absolute_frame = Y_old_C + (RefY_new(CarFrame_C) + RefY_global(i.e., 0))
(Although you should do that once the CarFrame is in its "definitive position" with respect to the global frame, that is, on picture D (the point has the same coordinates with respect to the CarFrame in both picture C and D, but different ones with respect to the global frame).)
It might seem a bit complex put in this way, but it is really simple. You just have to think carefully about one case and create the algorithm performing all the actions. The idea is extremely simple: if I am at 8 inside something which starts at 5, I am at 3 with respect to the container.
------------ UPDATE IN THE METHODOLOGY
As said in the comment, these last pictures prove that the originally-proposed calculation of the reference (max. Y/min. X) is not right: it shouldn't be the max./min. values of the carFrame but the minimum distances to the closer sides (= the perpendicular line from the left/bottom side to the point).
------------ TRIGONOMETRIC CALCS FOR THE SPECIFIC EXAMPLE
The proposed algorithm is the one you should apply in any situation, although in this specific case the most difficult part is not moving from one reference system to the other, but defining the reference point in the rotated system. Once this is done, the application to the non-rotated case is immediate.
Here are some calcs to perform this action (I have done them pretty quickly, so better to take them as guidance and redo them on your own); also I have only considered the case in the pictures, that is, rotation about the left/bottom point:
X_rotated = dx * Cos(alpha)
where dx = X_orig - (max_Y_CarFrame - Y_Orig) * Tan(alpha)
Y_rotated = dy * Cos(alpha)
where dy = Y_orig - X_orig * Tan(alpha)
NOTE: (max_Y_CarFrame - Y_Orig) in dx and X_orig in dy assume that the basic reference system is 0,0 (min. X and min. Y). If this is not the case, you would have to change these variables.
X_rotated and Y_rotated give the perpendicular distance from the point to the closest side of the carFrame (respectively, the left and bottom sides). By applying these formulae (I insist: analyse them carefully), you get X_old_D_absolute_frame/Y_old_D_absolute_frame; that is, you just have to add the left/bottom values of the carFrame (if it is located at 0,0, these would be the final values).
How do I find the closest intersection in 2D between a ray:
x = x0 + t*cos(a), y = y0 + t*sin(a)
and m polylines:
{(x1,y1), (x2,y2), ..., (xn,yn)}
QUICKLY?
I started by looping through all line segments, and for each line segment {(x1,y1),(x2,y2)} solving:
x1 + u*(x2-x1) = x0 + t*cos(a)
y1 + u*(y2-y1) = y0 + t*sin(a)
by Cramer's rule, and afterward sorting the intersections on distance, but that was slow :-(
BTW: the polylines happen to be monotonically increasing in x.
Coordinate system transformation
I suggest you first transform your setup to something with easier coordinates:
Take your point p = (x, y).
Move it by (-x0, -y0) so that the ray now starts at the center.
Rotate it by -a so that the ray now lies on the x axis.
So far the above operations have cost you four additions and four multiplications per point:
ca = cos(a) # computed only once
sa = sin(a) # likewise
x' = x - x0
y' = y - y0
x'' = x'*ca + y'*sa
y'' = y'*ca - x'*sa
Checking for intersections
Now you know that a segment of the polyline will only intersect the ray if the sign of its y'' value changes, i.e. y1'' * y2'' < 0. You could even postpone the computation of the x'' values until after this check. Furthermore, the segment will only intersect the ray if the intersection of the segment with the x axis occurs for x > 0, which can only happen if either value is greater than zero, i.e. x1'' > 0 or x2'' > 0. If both x'' are greater than zero, then you know there is an intersection.
The following paragraph is kind of optional, don't worry if you don't understand it, there is an alternative noted later on.
If one x'' is positive but the other is negative, then you have to check further. Suppose that the sign of y'' changed from negative to positive, i.e. y1'' < 0 < y2''. The line from p1'' to p2'' will intersect the x axis at x > 0 if and only if the triangle formed by p1'', p2'' and the origin is oriented counter-clockwise. You can determine the orientation of that triangle by examining the sign of the determinant x1''*y2'' - x2''*y1'', it will be positive for a counter-clockwise triangle. If the direction of the sign change is different, the orientation has to be different as well. So to take this together, you can check whether
(x1'' * y2'' - x2'' * y1'') * y2'' > 0
If that is the case, then you have an intersection. Notice that there were no costly divisions involved so far.
Computing intersections
As you want to not only decide whether an intersection exists, but actually find a specific one, you now have to compute that intersection. Let's call it p3. It must satisfy the equations
(x2'' - x3'')/(y2'' - y3'') = (x1'' - x3'')/(y1'' - y3'') and
y3'' = 0
which results in
x3'' = (x1'' * y2'' - x2'' * y1'')/(y2'' - y1'')
Instead of the triangle orientation check from the previous paragraph, you could always compute this x3'' value and discard any results where it turns out to be negative. Less code, but more divisions. Benchmark if in doubt about performance.
To find the point closest to the origin of the ray, you take the result with minimal x3'' value, which you can then transform back into its original position:
x3 = x3''*ca + x0
y3 = x3''*sa + y0
There you are.
Note that all of the above assumed that all numbers were either positive or negative. If you have zeros, it depends on the exact interpretation of what you actually want to compute, how you want to handle these border cases.
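Putting the pieces together, here is a compact sketch in Python of the transform-test-compute procedure described above; it takes the "less code, but more divisions" route of always computing x3'' for segments whose y'' changes sign, and it ignores the zero border cases just mentioned:
import math

def closest_ray_polyline_intersection(x0, y0, a, points):
    ca, sa = math.cos(a), math.sin(a)
    # move and rotate every vertex so the ray starts at the origin along +x
    pts = [((x - x0)*ca + (y - y0)*sa, (y - y0)*ca - (x - x0)*sa)
           for x, y in points]
    best = None
    for (x1, y1), (x2, y2) in zip(pts, pts[1:]):
        if y1 * y2 >= 0:                      # no sign change in y'': no hit
            continue
        x3 = (x1*y2 - x2*y1) / (y2 - y1)      # crossing of the x axis
        if x3 >= 0 and (best is None or x3 < best):
            best = x3
    if best is None:
        return None
    return (best*ca + x0, best*sa + y0)       # back to original coordinates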
To avoid checking the intersection against every segment, some space partition is needed, such as a quadtree or a BSP tree. With a space partition, the ray is first tested against the partitions themselves.
In this case, since the points are sorted by x-coordinate, it is possible to build the partition from boxes (min x, min y)-(max x, max y) over parts of the polyline. The root box is the min-max of all points, and it is split into 2 boxes for the first and second halves of the polyline (the halves have the same number of segments, or one box has one more). This splitting is done recursively until only one segment is left in a box.
To check a ray intersection, start with the root box and check whether the ray intersects it; if it does, check the 2 sub-boxes for an intersection, testing the closer sub-box before the farther one.
Checking a ray-box intersection amounts to checking whether the ray crosses an axis-aligned line between 2 positions; that is done for the 4 box boundaries.
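A rough sketch of that idea in Python; the helper names are mine rather than from the answer, the ray is an origin o plus unit direction d, and ray_segment uses the same Cramer's-rule solve as in the question. Taking the first hit from the closer box relies on the x-sorted polyline: the two child boxes never overlap in x.
import math

def bbox(points, lo, hi):
    xs = [p[0] for p in points[lo:hi+1]]
    ys = [p[1] for p in points[lo:hi+1]]
    return (min(xs), min(ys), max(xs), max(ys))

def build(points, lo, hi):
    # node covering segments lo..hi-1; split in half until one segment remains
    node = {"box": bbox(points, lo, hi), "lo": lo, "hi": hi}
    if hi - lo > 1:
        mid = (lo + hi) // 2
        node["children"] = (build(points, lo, mid), build(points, mid, hi))
    return node

def ray_box(o, d, box):
    # slab test: entry distance along the ray, or None if the box is missed
    tmin, tmax = 0.0, math.inf
    for ax in (0, 1):
        if abs(d[ax]) < 1e-12:
            if not box[ax] <= o[ax] <= box[ax + 2]:
                return None
        else:
            t1 = (box[ax] - o[ax]) / d[ax]
            t2 = (box[ax + 2] - o[ax]) / d[ax]
            tmin, tmax = max(tmin, min(t1, t2)), min(tmax, max(t1, t2))
    return tmin if tmin <= tmax else None

def ray_segment(o, d, p, q):
    # Cramer's rule for o + t*d = p + u*(q - p); distance t, or None
    ex, ey = q[0] - p[0], q[1] - p[1]
    det = ex * d[1] - ey * d[0]
    if abs(det) < 1e-12:
        return None
    t = (ex * (p[1] - o[1]) - ey * (p[0] - o[0])) / det
    u = (d[0] * (p[1] - o[1]) - d[1] * (p[0] - o[0])) / det
    return t if t >= 0 and 0 <= u <= 1 else None

def query(node, points, o, d):
    # closer-box-first traversal, as described above
    if ray_box(o, d, node["box"]) is None:
        return None
    if "children" not in node:
        return ray_segment(o, d, points[node["lo"]], points[node["hi"]])
    def entry(n):
        t = ray_box(o, d, n["box"])
        return math.inf if t is None else t
    near, far = sorted(node["children"], key=entry)
    hit = query(near, points, o, d)
    return hit if hit is not None else query(far, points, o, d)

# usage: tree = build(points, 0, len(points) - 1); t = query(tree, points, o, d)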
I need a function that returns points on a circle in three dimensions.
The circle should "cap" a line segment defined by points A and B and its radius. Each cap is perpendicular to the line segment and centered at one of the endpoints.
Here is a shitty diagram
Let N be the unit vector in the direction from A to B, i.e., N = (B-A) / length(A-B). The first step is to find two more vectors X and Y such that {N, X, Y} form a basis. That means you want two more vectors so that all pairs of {N, X, Y} are perpendicular to each other and also so that they are all unit vectors. Another way to think about this is that you want to create a new coordinate system whose x-axis lines up with the line segment. You need to find vectors pointing in the direction of the y-axis and z-axis.
Note that there are infinitely many choices for X and Y. You just need to somehow find two that work.
One way to do this is to first find vectors {N, W, V} where N is from above and W and V are two of (1,0,0), (0,1,0), and (0,0,1). Pick the two vectors for W and V that correspond to the smallest coordinates of N. So if N = (.31, .95, 0) then you pick (1,0,0) and (0,0,1) for W and V. (Math geek note: This way of picking W and V ensures that {N,W,V} spans R^3). Then you apply the Gram-Schmidt process to {N, W, V} to get vectors {N, X, Y} as above. Note that you need the vector N to be the first vector so that it doesn't get changed by the process.
So now you have two vectors that are perpendicular to the line segment and perpendicular to each other. This means the points on the circle around A are X * cos t + Y * sin t + A where 0 <= t < 2 * pi. This is exactly like the usual description of a circle in two dimensions; it is just written in the new coordinate system described above.
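A small sketch of that recipe in Python (pick the two axis vectors with the smallest |N| components, Gram-Schmidt them against N, then sweep the angle t); the segment endpoints, radius and sample count in the last line are made up:
import numpy as np

def cap_circle_points(A, B, radius, n=32):
    A, B = np.asarray(A, float), np.asarray(B, float)
    N = (B - A) / np.linalg.norm(B - A)
    # axis vectors corresponding to the two smallest components of N
    W, V = np.eye(3)[np.argsort(np.abs(N))[:2]]
    # Gram-Schmidt: remove the N (and X) components, then normalise
    X = W - np.dot(W, N) * N
    X /= np.linalg.norm(X)
    Y = V - np.dot(V, N) * N - np.dot(V, X) * X
    Y /= np.linalg.norm(Y)
    t = np.linspace(0, 2*np.pi, n, endpoint=False)
    return A + radius * (np.outer(np.cos(t), X) + np.outer(np.sin(t), Y))

print(cap_circle_points((0, 0, 0), (0, 0, 5), radius=2.0, n=4))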
As David Norman noted, the crux is to find two orthogonal unit vectors X, Y that are orthogonal to N. However, I think a simpler way to compute these is to find the Householder reflection Q that maps N to a multiple of (1,0,0) and then take X as the image of (0,1,0) under Q and Y as the image of (0,0,1) under Q. While this might sound complicated, it comes down to:
s = (N[0] > 0.0) ? 1.0 : -1.0    // sign chosen to avoid cancellation
t = N[0] + s; f = -1.0/(s*t);    // Q = I + f*v*v^T with v = (N[0]+s, N[1], N[2])
X[0] = f*N[1]*t; X[1] = 1 + f*N[1]*N[1]; X[2] = f*N[1]*N[2];   // X = Q*(0,1,0)
Y[0] = f*N[2]*t; Y[1] = f*N[1]*N[2]; Y[2] = 1 + f*N[2]*N[2];   // Y = Q*(0,0,1)
Given two image buffers (assume it's an array of ints of size width * height, with each element a color value), how can I map an area defined by a quadrilateral from one image buffer into the other (always square) image buffer? I'm led to understand this is called "projective transformation".
I'm also looking for a general (not language- or library-specific) way of doing this, such that it could be reasonably applied in any language without relying on "magic function X that does all the work for me".
An example: I've written a short program in Java using the Processing library (processing.org) that captures video from a camera. During an initial "calibrating" step, the captured video is output directly into a window. The user then clicks on four points to define an area of the video that will be transformed, then mapped into the square window during subsequent operation of the program. If the user were to click on the four points defining the corners of a door visible at an angle in the camera's output, then this transformation would cause the subsequent video to map the transformed image of the door to the entire area of the window, albeit somewhat distorted.
Using linear algebra is much easier than all that geometry! Plus you won't need to use sine, cosine, etc, so you can store each number as a rational fraction and get the exact numerical result if you need it.
What you want is a mapping from your old (x,y) co-ordinates to your new (x',y') co-ordinates. You can do it with matrices. You need to find the 2-by-4 projection matrix P such that P times the old coordinates equals the new co-ordinates. We'll assume that you're mapping lines to lines (not, for instance, straight lines to parabolas). Because you have a projection (parallel lines don't stay parallel) and translation (sliding), you need a factor of (xy) and (1), too. Drawn as matrices:
          [x  ]
[a b c d] [y  ]   [x']
[e f g h] [x*y] = [y']
          [1  ]
You need to know a through h so solve these equations:
a*x_0 + b*y_0 + c*x_0*y_0 + d = i_0
a*x_1 + b*y_1 + c*x_1*y_1 + d = i_1
a*x_2 + b*y_2 + c*x_2*y_2 + d = i_2
a*x_3 + b*y_3 + c*x_3*y_3 + d = i_3
e*x_0 + f*y_0 + g*x_0*y_0 + h = j_0
e*x_1 + f*y_1 + g*x_1*y_1 + h = j_1
e*x_2 + f*y_2 + g*x_2*y_2 + h = j_2
e*x_3 + f*y_3 + g*x_3*y_3 + h = j_3
Again, you can use linear algebra:
[x_0 y_0 x_0*y_0 1] [a e] [i_0 j_0]
[x_1 y_1 x_1*y_1 1] * [b f] = [i_1 j_1]
[x_2 y_2 x_2*y_2 1] [c g] [i_2 j_2]
[x_3 y_3 x_3*y_3 1] [d h] [i_3 j_3]
Plug in your corners for x_n,y_n,i_n,j_n. (Corners work best because they are far apart to decrease the error if you're picking the points from, say, user-clicks.) Take the inverse of the 4x4 matrix and multiply it by the right side of the equation. The transpose of that matrix is P. You should be able to find functions to compute a matrix inverse and multiply online.
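As a small sketch of that solve in Python/NumPy (the corner coordinates here are made-up placeholders; each row of the 4x4 matrix is [x, y, x*y, 1] for one clicked corner):
import numpy as np

# hypothetical source corners (x_n, y_n) and target corners (i_n, j_n)
src = np.array([(10, 15), (300, 20), (310, 250), (5, 240)], float)
dst = np.array([(0, 0), (255, 0), (255, 255), (0, 255)], float)

M = np.column_stack([src[:, 0], src[:, 1], src[:, 0] * src[:, 1], np.ones(4)])
coeffs = np.linalg.solve(M, dst)   # columns: (a, b, c, d) and (e, f, g, h)

def map_point(x, y):
    return np.array([x, y, x * y, 1.0]) @ coeffs

print(map_point(10, 15))           # a corner maps to its target, here (0, 0)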
Where you'll probably have bugs:
When computing, remember to check for division by zero. That's a sign that your matrix is not invertible. That might happen if you try to map one (x,y) co-ordinate to two different points.
If you write your own matrix math, remember that matrices are usually specified row,column (vertical,horizontal) and screen graphics are x,y (horizontal,vertical). You're bound to get something wrong the first time.
EDIT
The assumption below of the invariance of angle ratios is incorrect. Projective transformations instead preserve cross-ratios and incidence. A solution then is:
Find the point C' at the intersection of the lines defined by the segments AD and CP.
Find the point B' at the intersection of the lines defined by the segments AD and BP.
Determine the cross-ratio of B'DAC', i.e. r = (B'A * DC') / (DA * B'C').
Construct the projected line F'HEG'. The cross-ratio of these points is equal to r, i.e. r = (F'E * HG') / (HE * F'G').
F'F and G'G will intersect at the projected point Q so equating the cross-ratios and knowing the length of the side of the square you can determine the position of Q with some arithmetic gymnastics.
Hmmmm....I'll take a stab at this one. This solution relies on the assumption that ratios of angles are preserved in the transformation. See the image for guidance (sorry for the poor image quality...it's REALLY late). The algorithm only provides the mapping of a point in the quadrilateral to a point in the square. You would still need to implement dealing with multiple quad points being mapped to the same square point.
Let ABCD be a quadrilateral where A is the top-left vertex, B is the top-right vertex, C is the bottom-right vertex and D is the bottom-left vertex. The pair (xA, yA) represent the x and y coordinates of the vertex A. We are mapping points in this quadrilateral to the square EFGH whose side has length equal to m.
Compute the lengths AD, CD, AC, BD and BC:
AD = sqrt((xA-xD)^2 + (yA-yD)^2)
CD = sqrt((xC-xD)^2 + (yC-yD)^2)
AC = sqrt((xA-xC)^2 + (yA-yC)^2)
BD = sqrt((xB-xD)^2 + (yB-yD)^2)
BC = sqrt((xB-xC)^2 + (yB-yC)^2)
Let thetaD be the angle at the vertex D and thetaC be the angle at the vertex C. Compute these angles using the cosine law:
thetaD = arccos((AD^2 + CD^2 - AC^2) / (2*AD*CD))
thetaC = arccos((BC^2 + CD^2 - BD^2) / (2*BC*CD))
We map each point P in the quadrilateral to a point Q in the square. For each point P in the quadrilateral, do the following:
Find the distance DP:
DP = sqrt((xP-xD)^2 + (yP-yD)^2)
Find the distance CP:
CP = sqrt((xP-xC)^2 + (yP-yC)^2)
Find the angle thetaP1 between CD and DP:
thetaP1 = arccos((DP^2 + CD^2 - CP^2) / (2*DP*CD))
Find the angle thetaP2 between CD and CP:
thetaP2 = arccos((CP^2 + CD^2 - DP^2) / (2*CP*CD))
The ratio of thetaP1 to thetaD should be the ratio of thetaQ1 to 90. Therefore, calculate thetaQ1:
thetaQ1 = thetaP1 * 90 / thetaD
Similarly, calculate thetaQ2:
thetaQ2 = thetaP2 * 90 / thetaC
Find the distance HQ:
HQ = m * sin(thetaQ2) / sin(180-thetaQ1-thetaQ2)
Finally, the x and y position of Q relative to the bottom-left corner of EFGH is:
x = HQ * cos(thetaQ1)
y = HQ * sin(thetaQ1)
You would have to keep track of how many colour values get mapped to each point in the square so that you can calculate an average colour for each of those points.
I think what you're after is a planar homography, have a look at these lecture notes:
http://www.cs.utoronto.ca/~strider/vis-notes/tutHomography04.pdf
If you scroll down to the end you'll see an example of just what you're describing. I expect there's a function in the Intel OpenCV library which will do just this.
There is a C++ project on CodeProject that includes source for projective transformations of bitmaps. The maths are on Wikipedia here. Note that, so far as I know, a projective transformation will not map any arbitrary quadrilateral onto another, but will do so for triangles; you may also want to look up skewing transforms.
If this transformation has to look good (as opposed to the way a bitmap looks if you resize it in Paint), you can't just create a formula that maps destination pixels to source pixels. Values in the destination buffer have to be based on a complex averaging of nearby source pixels or else the results will be highly pixelated.
So unless you want to get into some complex coding, use someone else's magic function, as smacl and Ian have suggested.
Here's how I would do it in principle:
map the origin of A to the origin of B via a translation vector t.
take the unit vectors of A, (1,0) and (0,1), and calculate how they would be mapped onto the unit vectors of B.
this gives you a transformation matrix M so that every vector a in A maps to M a + t
invert the matrix and negate the translation vector so that for every vector b in B you have the inverse mapping b -> M^-1 (b - t)
once you have this transformation, for each point in the target area in B, find the corresponding point in A and copy it.
The advantage of this mapping is that you only calculate the points you need, i.e. you loop on the target points, not the source points. It was a widely used technique in the "demo coding" scene a few years back.
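A tiny sketch of that inverse-mapping loop in Python, assuming the affine case described above (matrix M plus translation t) and flat int buffers as in the question; all names and sizes here are made up, and nearest-neighbour sampling is used for brevity:
import numpy as np

def warp(src, src_w, src_h, dst_w, dst_h, M, t):
    Minv = np.linalg.inv(M)
    dst = [0] * (dst_w * dst_h)
    for y in range(dst_h):                    # loop over *target* pixels only
        for x in range(dst_w):
            sx, sy = Minv @ (np.array([x, y], float) - t)
            sx, sy = int(round(sx)), int(round(sy))
            if 0 <= sx < src_w and 0 <= sy < src_h:
                dst[y * dst_w + x] = src[sy * src_w + sx]
    return dst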