How to determine polygon rotation angle - geometry

I am writing a program (.NET) to create a stadium-style layout and need to determine each polygon's angle of rotation relative to the horizontal.
This is so I can construct the contents of the polygon and also rotate those contents correctly to fit inside.
Given the image below as an example simulating each variant of the facing direction (indicated by the red line), how could I determine the rotation angle needed to get the shape to have the red line on top, as already shown by shape 5?
http://i40.tinypic.com/16ifhoo.gif
I have found logic to determine the angle of the points that make up the red line, but I also need to know the rotation to get it back to horizontal.
I'm not sure if I need some central reference point for all polygons to help.
How could I best solve this?

If you know the angle of the red line for some polygon (a, say), then the polygon is on one side or the other of that line. So:
Use the average colour of some pixels near the line on both sides to determine which is the case.
If the polygon is above the line, the rotation angle is 180+a.
If the polygon is below the line, the rotation is a.
where "above" and "below" correspond to the smaller-angle and larger-angle sides of the line, according to how you measure a.
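If you do have the polygon's vertex list available (likely, since you are generating the layout yourself), the same idea can be sketched without sampling pixels: use the polygon's centroid to decide which side of the red line the body is on. A rough Python sketch of that variant - the function name and sign conventions are assumptions, so check it against shape 5:

import math

def rotation_for_red_line_on_top(line_p1, line_p2, polygon_vertices):
    # Angle of the red line (a in the answer above), screen coordinates, y down.
    (x1, y1), (x2, y2) = line_p1, line_p2
    a = math.degrees(math.atan2(y2 - y1, x2 - x1))

    # Use the centroid as a stand-in for "the body of the polygon".
    cx = sum(x for x, _ in polygon_vertices) / len(polygon_vertices)
    cy = sum(y for _, y in polygon_vertices) / len(polygon_vertices)

    # Cross product: which side of the directed line p1 -> p2 the centroid is on.
    side = (x2 - x1) * (cy - y1) - (y2 - y1) * (cx - x1)

    # Per the answer: a on one side, 180 + a on the other. Which sign means
    # "below" depends on the direction you list the line's endpoints in.
    return (a if side > 0 else a + 180) % 360

# Shape 5: horizontal red line on top, body below it -> no rotation needed.
print(rotation_for_red_line_on_top((0, 0), (10, 0), [(0, 0), (10, 0), (10, 5), (0, 5)]))  # 0.0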

I would try to calculate the normal vector on each red line (e.g. 0 degrees for polygon 5, 45 degrees for 4, 90 degrees for 3, etc.); finding the angle you need to rotate that normal - and thus the matching polygon - so that the normal "points up" should then be very simple.
Unfortunately I don't have the needed formulae available for you off the top of my head, but Googling "normal vector" and/or searching for it on Wikipedia should get you started just fine, I think. Possibly in the direction of the so-called 'cross product'.
No central reference point for all polygons should be needed for this (the normal direction is not related to absolute coordinates).
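As a rough illustration of that idea (in Python for brevity, not .NET), assuming screen coordinates with y growing downward, "up" meaning (0, -1), and the polygon vertices being available to pick the outward normal:

import math

def rotation_to_point_normal_up(line_p1, line_p2, polygon_vertices):
    # Direction of the red line.
    dx = line_p2[0] - line_p1[0]
    dy = line_p2[1] - line_p1[1]

    # The two candidate normals are (-dy, dx) and (dy, -dx); keep the one that
    # points away from the polygon's centroid, i.e. the outward normal.
    cx = sum(x for x, _ in polygon_vertices) / len(polygon_vertices)
    cy = sum(y for _, y in polygon_vertices) / len(polygon_vertices)
    nx, ny = -dy, dx
    mid_x = (line_p1[0] + line_p2[0]) / 2
    mid_y = (line_p1[1] + line_p2[1]) / 2
    if nx * (cx - mid_x) + ny * (cy - mid_y) > 0:  # it points inward, so flip it
        nx, ny = -nx, -ny

    # Angle needed to carry the outward normal onto "up" = (0, -1) in screen
    # coordinates (y grows downward); the sign follows atan2, so check it
    # against the rotation direction of your drawing API.
    angle_normal = math.atan2(ny, nx)
    angle_up = math.atan2(-1.0, 0.0)
    return math.degrees(angle_up - angle_normal) % 360

# Shape 5: the red line is already on top, so no rotation is needed.
print(rotation_to_point_normal_up((0, 0), (10, 0), [(0, 0), (10, 0), (10, 5), (0, 5)]))  # 0.0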

The sin, cos, and tan functions let you convert between triangle edge ratios and angles in degrees.
Imagine one end of the red line is at (x1, y1) and the other end is at (x2, y2). You can treat the red line as the hypotenuse of a right triangle and use arctan to get the angle in degrees.
The ratio between the legs is (x2 - x1) / (y2 - y1), so the rotation of the red line is arctan((x2 - x1) / (y2 - y1)). Watch out for situations where y2 - y1 is 0!
Let's try one example from your picture, polygon 6 with coords (55, 65) and (65, 55). Type into Google: "arctan((65-55)/(55-65)) in degrees"
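For completeness, here is the same calculation in Python; math.atan2 is the usual way to sidestep the division-by-zero case mentioned above:

import math

# Polygon 6 from the picture: red line from (55, 65) to (65, 55).
x1, y1 = 55, 65
x2, y2 = 65, 55

# The formula above: ratio of the two legs, then arctan.
angle = math.degrees(math.atan((x2 - x1) / (y2 - y1)))
print(angle)  # -45.0, the same value Google returns

# math.atan2 takes the two legs separately, so a red line with y2 - y1 == 0
# needs no special-casing; it also distinguishes the two directions of the
# line, which is why it reports -45 + 180 = 135 for this example.
angle_full = math.degrees(math.atan2(x2 - x1, y2 - y1))
print(angle_full)  # 135.0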

Related

Fill between two radii of a circle with a geometry shader

I'm confused about using a geometry shader to solve this problem:
I have two radii of a circle; each radius is a line, and each line is made of a list of points, where every point has a colour.
How do I fill the area between the two lines with lines whose point colours are interpolated between the colours of the corresponding points (at equal radius) on the two given lines?
It may be complex, so I'll show it with a picture:
gradient Radiuses
I think the best way to solve this is a geometry shader, but if you know a better way with good performance, I'm listening.
Thanks.

What is the reference point for measuring angles in OpenCV?

I'm trying to infer an object's direction of movement using dense optical flow in OpenCV. I'm using calcOpticalFlowFarneback() to get flow coordinates and cartToPolar() to acquire vector angles which would indicate direction.
To interpret the results I need to know the reference point for measuring the angle. I have found this blog post indicating that the range of angles is 360°. That tells me that the angle measurement would go along the lines of the unit circle. I couldn't make out much more than that.
The documentation for cartToPolar() doesn't cover this and my attempts at testing it have failed.
It seems that the angle produced by cartToPolar() is in reference to the unit circle rotated clockwise by 90° centered on the image coordinate starting point in the top left corner. It would look like this.
I came to this conclusion by using the dense optical flow example provided by OpenCV. I replaced the line hsv[...,0] = ang*180/np.pi/2 with hsv[...,0] = ang*180/np.pi to get correct angle conversion from radians. Then I tested a video with people moving from top right to bottom left and vice versa. I sampled the dominant color with GIMP and got RGB values which I converted to HSV values. Hue value corresponds to the angle in degrees.
People moving from top right to bottom left produced an angle of about 300° and people moving the other way round produced an angle of about 120°. This hinted at the way the unit circle is positioned.
Looking at the code, fastAtan32f is used to compute the angles, and that seems to be an atan2 implementation.
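One way to double-check the convention on your own build is to feed cartToPolar() a few cardinal direction vectors and print the angles it reports, e.g.:

import cv2
import numpy as np

# Four unit flow vectors along the cardinal directions, in image coordinates
# (x to the right, y downward).
dx = np.array([1.0, 0.0, -1.0, 0.0], dtype=np.float32)
dy = np.array([0.0, 1.0, 0.0, -1.0], dtype=np.float32)

mag, ang = cv2.cartToPolar(dx, dy, angleInDegrees=True)
for x, y, a in zip(dx, dy, ang.ravel()):
    print(f"flow ({x:+.0f}, {y:+.0f}) -> {a:.0f} deg")
# The four printed angles show which axis is 0 deg and which way the angle
# increases for your OpenCV version, without relying on sampled video frames.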

Find degrees (0-360º) of point on a circle

I'm working on a small webapp in which I need to rotate shapes. I
would like to achieve this by grabbing a point on a circle and
dragging it around to rotate the image.
Here's a quick illustration to help explain things:
My main circle can be dragged anywhere on the canvas. I know its
radius (r) and where 12 o'clock (p0) will always be (cx, cy - r). What
I need to know is what angle p1 will be at (0-360º) so I can rotate the
contents of the main circle accordingly with Raphael.rotate().
I've run through a bunch of different JavaScript formulations to find this (example), but none seem to give me values between 0 and 360, and my basic math skills
are woefully deficient.
The Color Picker demo (sliding the cursor along the ring on the right) has the behavior I want, but even after poring over the source code I can't seem to replicate it accurately.
Anything to point me in the correct direction would be appreciated.
// Angle between the center of the circle and p1,
// measured in degrees counter-clockwise from the positive X axis (horizontal)
( Math.atan2(p1.y-cy,p1.x-cx) * 180/Math.PI + 360 ) % 360
The angle between the center of the circle and p0 will always be +90°. See Math.atan2 for more details.
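If you also want the value to hand to Raphael.rotate() - i.e. how far p1 has travelled from p0 - here is a small sketch of the same formula (shown in Python, but the maths is language-agnostic), assuming screen coordinates with y growing downward:

import math

def rotation_from_top(cx, cy, px, py):
    # Angle of p1 around the centre, same formula as above (0 deg = 3 o'clock).
    angle_p1 = math.degrees(math.atan2(py - cy, px - cx))
    # p0 at 12 o'clock is -90 deg (i.e. 270 deg) when y grows downward,
    # or +90 deg in conventional maths coordinates.
    angle_p0 = -90.0
    return (angle_p1 - angle_p0) % 360

# Dragging the handle to the 3 o'clock position reports a 90 degree rotation.
print(rotation_from_top(100, 100, 150, 100))  # 90.0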

Calculate rectangle width and height from diagonal and rotation

I have a rotated rectangle and I know the length of the diagonal. I also know the angle used to rotate the rectangle.
How can I calculate the width and height of the rectangle?
For a sketch of the problem, see:
1) Create a new line starting at one of the end-points of the diagonal and travelling at the rotation angle.
2) Project the other diagonal terminus onto this line. You now know one side of the rectangle.
3) Copy the segment to the other side of the diagonal and connect the endpoints to complete the rectangle.
The only 'tricky' code here is the projection. This webpage has some example code for Point-Line distance/projection: http://softsurfer.com/Archive/algorithm_0102/algorithm_0102.htm
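As a rough sketch of steps 1-2 (in Python; the names and the assumption that the 'width' side runs along the rotation angle are mine), the projection is just a couple of dot products:

import math

def rect_sides_from_diagonal(p1, p2, angle_deg):
    # Unit vector along the rectangle's rotated 'width' side, and its perpendicular.
    a = math.radians(angle_deg)
    ux, uy = math.cos(a), math.sin(a)
    vx, vy = -math.sin(a), math.cos(a)
    # Project the diagonal onto both directions (steps 1 and 2 above).
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    width = abs(dx * ux + dy * uy)
    height = abs(dx * vx + dy * vy)
    return width, height

# A 4 x 3 rectangle rotated by 30 degrees: diagonal from (0, 0) to the far corner.
a = math.radians(30)
far_corner = (4 * math.cos(a) - 3 * math.sin(a), 4 * math.sin(a) + 3 * math.cos(a))
print(rect_sides_from_diagonal((0, 0), far_corner, 30))  # approximately (4.0, 3.0)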
Thanks David Rutten,
I got it working. Your link about the projection was too much for my math knowledge, but with some googling I found a nice point-to-line intersection function which helped me calculate the length (distance) of one of the sides.
Unfortunately I'm too "new" here to award you with credits or reply to your answer.
#Eric bainville: I knew the distance because I had the point1 and point2 (upper-left and bottom-right) coordinates. With those coordinates it is possible. I didn't mention this, but luckily David guessed right that I knew them.
Thanks again!

Algorithm for Polygon Image Fill

I want an efficient algorithm to fill a polygon with an image; specifically, I want to fit an image into a trapezoid. Currently I am doing it in two steps:
1) First perform a StretchBlt on the image,
2) Then perform a column-by-column vertical StretchBlt.
Is there any better method to implement this? Is there any generic and fast algorithm which can fill any polygon?
Thanks,
Sunny
I can't help you with the distortion part, but filling polygons is pretty simple, especially if they are convex.
For each Y scan line have a table indexed by Y, containing a minX and maxX.
For each edge, run a DDA line-drawing algorithm, and use it to fill in the table entries.
For each Y line, now you have a minX and maxX, so you can just fill that segment of the scan line (a short sketch follows below).
The hard part is a mental trick - do not think of coordinates as specifying pixels. Think of coordinates as lying between the pixels. In other words, if you have a rectangle going from point 0,0 to point 2,2, it should light up 4 pixels, not 9. Most problems with polygon-filling revolve around this issue.
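Here is a rough Python sketch of those three steps for a convex polygon; it interpolates x directly per scan line rather than running an incremental DDA, and it treats coordinates as lying between pixels as described above:

def fill_convex_polygon(vertices, set_pixel):
    # Table of (min_x, max_x) per scan line, as in the steps above.
    ys = [y for _, y in vertices]
    y_min, y_max = int(min(ys)), int(max(ys))
    spans = {y: [float("inf"), float("-inf")] for y in range(y_min, y_max)}

    # Walk every edge and record where it crosses each pixel-row centre.
    n = len(vertices)
    for i in range(n):
        (x0, y0), (x1, y1) = vertices[i], vertices[(i + 1) % n]
        if y0 == y1:
            continue                                   # horizontal edges add nothing
        if y0 > y1:
            x0, y0, x1, y1 = x1, y1, x0, y0
        slope = (x1 - x0) / (y1 - y0)
        for y in range(int(y0), int(y1)):
            x = x0 + (y + 0.5 - y0) * slope            # x at the row centre
            spans[y][0] = min(spans[y][0], x)
            spans[y][1] = max(spans[y][1], x)

    # Fill each scan line between its recorded min and max.
    for y, (min_x, max_x) in spans.items():
        for x in range(int(round(min_x)), int(round(max_x))):
            set_pixel(x, y)

# Coordinates lie between pixels: a 2 x 2 square lights up 4 pixels, not 9.
pixels = []
fill_convex_polygon([(0, 0), (2, 0), (2, 2), (0, 2)], lambda x, y: pixels.append((x, y)))
print(sorted(pixels))  # [(0, 0), (0, 1), (1, 0), (1, 1)]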
ADDED: OK, it sounds like what you're really asking is how to stretch the image to a non-rectangular (but trapezoidal) shape. I would do it in terms of parameters s and t, going from 0 to 1. In other words, a location in the original rectangle is (x + w0*s, y + h0*t). Then define a function such that s and t also map to positions in the trapezoid, such as ((x + t*a) + w0*s*(1-t) + w1*s*t, y + h1*t). This defines a coordinate mapping between the two shapes. Then just scan x and y, converting to s and t, and mapping points from one to the other. You probably want a little smoothing filter rather than a direct copy.
ADDED to try to give a better explanation:
I'm supposing both your rectangle and trapezoid have top and bottom edges parallel to the X axis. The lower-left corner of the rectangle is <x0,y0>, and the lower-left corner of the trapezoid is <x1,y1>. I assume the rectangle's width and height are <w,h>.
For the trapezoid, I assume it has height h1, that its lower width is w0, and that its upper width is w1. I assume its left edge "slants" by a distance a, so that the position of its upper-left corner is <x1+a, y1+h1>. Now suppose you iterate <x,y> over the rectangle. At each point, compute s = (x-x0)/w and t = (y-y0)/h, which are both in the range 0 to 1. (I'll let you figure out how to do that without using floating point.) Then convert that to a coordinate in the trapezoid, as xt = ((x1 + t*a) + s*(w0*(1-t) + w1*t)), and yt = y1 + h1*t. Then <xt,yt> is the point in the trapezoid corresponding to <x,y> in the rectangle. Now I'll let you figure out how to do the copying :-) Good luck.
P.S. And please don't forget - coordinates fall between pixels, not on them.
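Putting the mapping above into code, a minimal Python sketch (using the same names x0, y0, w, h, x1, y1, w0, w1, a, h1, and floating point for clarity):

def rect_to_trapezoid(x, y, rect, trap):
    # rect = (x0, y0, w, h): lower-left corner and size of the source rectangle.
    # trap = (x1, y1, w0, w1, a, h1): lower-left corner, lower and upper widths,
    #        slant of the left edge, and height of the target trapezoid.
    x0, y0, w, h = rect
    x1, y1, w0, w1, a, h1 = trap
    s = (x - x0) / w                       # 0..1 across the rectangle
    t = (y - y0) / h                       # 0..1 up the rectangle
    xt = (x1 + t * a) + s * (w0 * (1 - t) + w1 * t)
    yt = y1 + h1 * t
    return xt, yt

# The rectangle's corners land exactly on the trapezoid's corners.
rect = (0, 0, 10, 5)
trap = (0, 0, 10, 6, 2, 5)                 # lower width 10, upper width 6, left edge slants by 2
for corner in [(0, 0), (10, 0), (0, 5), (10, 5)]:
    print(corner, "->", rect_to_trapezoid(*corner, rect, trap))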
Would it be feasible to sidestep the problem and use OpenGL to do this for you? OpenGL can render to memory contexts and if you can take advantage of any hardware acceleration by doing this that'll completely dwarf any code tweaks you can make on the CPU (although on some older cards memory context rendering may not be able to take advantage of the hardware).
If you want to do this completely in software MESA may be an option.
