I'm trying to draw simple scaled points in my custom graphics engine. The points are scaled in pixel space, and the radius of each point is in pixels, but the positions of the points fed to the draw function are in world coordinates.
So far, everything is working great, except for a depth clipping issue. The points are of constant size, regardless of how far away they are, which is done by offsetting the vertices in projected/clip space. However, when they are close to surfaces, they partially intersect them in the depth buffer.
Since these points represent world coordinates, I want them to use the depth buffer, and be hidden behind objects that are in front of them. However, when the point is close to a surface, I want to push it toward the camera, so it doesn't partially intersect it. I think it is easier to just always do this push, regardless of the point being close to a surface. What makes the most sense to me is to just push it by its radius, so that all of its vertices are exactly far enough away to avoid clipping into nearby surfaces.
The easiest way I've found to do this is to simply subtract from the Z value in the vertex shader, after transforming into view-projection space. However, I'm having some trouble converting my pixel radius into a depth offset. Regardless of the math I use, what works close up never seems to work far away. I'm thinking maybe this is due to how the z buffer is non-linear, but I could be wrong.
Currently, the closest I've been to solving this is the following:
proj_vertex_pos.z -= point_pixel_radius / proj_vertex_pos.w * 100.0
I'm honestly not sure why 100.0 helps make this work yet. I added it simply because dividing the radius by w was too small of a value. Can anyone point me in the right direction? How do I convert my pixel distance into a depth distance? Especially if the depth distance changes scale depending on which depth you are at? Or am I just way off?
The solution was to convert my pixel-space radius into world-space units, since the z-buffer is still in world space, even after transforming by the view-projection transform. This can be done by converting pixels into a factor (factor = pixels / screen_size), then converting the factor into world-space units, which was a little more involved - I had to calculate the world-space size of the screen at a given distance, then multiply the factor by that to get world units. I can post the related code if anyone needs it. There's probably a simpler way to calculate it, but my brain always goes straight for factors.
The reason I was getting different results at different distances was mainly because I was only offsetting the z component of the clip position by the result. It's also necessary to offset the w component, to make the depth offset work at any distance (linearly). However, in order to offset the w component, you first have to divide xy by the old w, modify w as needed, then multiply xy by the new w. This made the math pretty involved, so I changed the strategy to offset the vertex before clip space, which requires calculating the distance to the camera in Z space manually, but it honestly ended up being about the same amount of math either way.
Here is the final vertex shader at the moment. Hopefully the global values make sense. I did not modify this to post it, so please forgive any silliness in my comments. EDIT: I had to make some edits to this, because I was accidentally moving the vertex along the camera-Z direction instead of directly toward the camera:
lerpPoint main(vinBake vin)
{
    // prepare output
    lerpPoint pin;
    // extract radius/size from input
    pin.InRadius = vin.TexCoord.y;
    // compute offset from vertex to camera
    float3 to_cam_offset = Scene.CamPos - vin.Position.xyz;
    // compute the Z distance of the camera from the vertex
    float cam_z_dist = -dot( Scene.CamZ, to_cam_offset );
    // compute the radius factor
    // + this describes what percentage of the screen is covered by our radius
    // + this removes it from pixel space into factor-space
    float radius_fac = Scene.InvScreenRes.x * pin.InRadius;
    // compute world-space radius by scaling with FieldFactor
    // + FieldFactor.x represents the world-space width of the camera view at whatever distance we scale it by
    // + here, we scale FieldFactor.x by the camera z distance, which gives us the world radius, in world units
    // + we must multiply by 2 because FieldFactor.x only represents HALF of the screen
    float radius_world = radius_fac * Scene.FieldFactor.x * cam_z_dist * 2.0;
    // finally, push the vertex toward the camera by the world radius
    // + note: moving by radius will only work with surfaces facing the camera, since we are moving toward the camera, rather than away from the surface
    // + because of this, we also multiply by another 4, to compensate for nearby surface angles, but there is no scale that would work for every angle
    float3 offset = normalize(to_cam_offset) * (radius_world * -4.0);
    // generate projected position
    // + after this, x=-1 is left, x=+1 is right, y=-1 is bottom, and y=+1 is top of screen
    // + note that after this transform, w represents "distance from camera", and z represents "distance from near plane", both in world space
    pin.ClipPos = mul( Scene.ViewProj, float4( vin.Position.xyz + offset, 1.0) );
    // calculate radius of point, in clip space, from our radius factor
    // + we scale by 2 to convert pixel radius into clip-radius
    float clip_radius = radius_fac * 2.0 * pin.ClipPos.w;
    // compute scaled clip-space offset and apply it to our clip-position
    // + vin.Prop.xy: -1,-1 = bottom-left, -1,1 = top-left, 1,-1 = bottom-right, 1,1 = top-right (note: in clip-space, +1 = top, -1 = bottom)
    // + we scale by clipping depth (part of clip_radius) to retain constant scale, but this will give us a VERY LARGE result
    // + we scale by inverse resolution (clip_radius) to convert our input screen scale (eg, 1->1024) into a clip scale (eg, 0.001 to 1.0)
    pin.ClipPos.x += vin.Prop.x * clip_radius;
    pin.ClipPos.y += vin.Prop.y * clip_radius * Scene.Aspect;
    // return result
    return pin;
}
Here is the other version that offsets z & w instead of changing things in world space. After the edits above, this is probably the better option:
lerpPoint main(vinBake vin)
{
    // prepare output
    lerpPoint pin;
    // extract radius/size from input
    pin.InRadius = vin.TexCoord.y;
    // generate projected position
    // + after this, x=-1 is left, x=+1 is right, y=-1 is bottom, and y=+1 is top of screen
    // + note that after this transform, w represents "distance from camera", and z represents "distance from near plane", both in world space
    pin.ClipPos = mul( Scene.ViewProj, float4( vin.Position.xyz, 1.0) );
    // compute the radius factor
    // + this describes what percentage of the screen is covered by our radius
    // + this removes it from pixel space into factor-space
    float radius_fac = Scene.InvScreenRes.x * pin.InRadius;
    // compute world-space radius by scaling with FieldFactor
    // + FieldFactor.x represents the world-space width of the camera view at whatever distance we scale it by
    // + here, we scale FieldFactor.x by the camera z distance, which gives us the world radius, in world units
    // + we must multiply by 2 because FieldFactor.x only represents HALF of the screen
    float radius_world = radius_fac * Scene.FieldFactor.x * pin.ClipPos.w * 2.0;
    // offset depth by our world radius
    // + we scale this extra to compensate for surfaces with high angles relative to the camera (since we are moving directly at it)
    // + notice we have to make the perspective divide before modifying w, then re-apply it after, or xy will be off
    pin.ClipPos.xy /= pin.ClipPos.w;
    pin.ClipPos.z -= radius_world * 10.0;
    pin.ClipPos.w -= radius_world * 10.0;
    pin.ClipPos.xy *= pin.ClipPos.w;
    // calculate radius of point, in clip space, from our radius factor
    // + we scale by 2 to convert pixel radius into clip-radius
    float clip_radius = radius_fac * 2.0 * pin.ClipPos.w;
    // compute scaled clip-space offset and apply it to our clip-position
    // + vin.Prop.xy: -1,-1 = bottom-left, -1,1 = top-left, 1,-1 = bottom-right, 1,1 = top-right (note: in clip-space, +1 = top, -1 = bottom)
    // + we scale by clipping depth (part of clip_radius) to retain constant scale, but this will give us a VERY LARGE result
    // + we scale by inverse resolution (clip_radius) to convert our input screen scale (eg, 1->1024) into a clip scale (eg, 0.001 to 1.0)
    pin.ClipPos.x += vin.Prop.x * clip_radius;
    pin.ClipPos.y += vin.Prop.y * clip_radius * Scene.Aspect;
    // return result
    return pin;
}
I have been measuring the change in orientation of a 3D shape (a 3-space sensor) with different starting positions for a controlled end position.
I am using Euler angles, (P)itch, (R)oll and (Y)aw.
My measure of orientation change is:
Orientation change OC = |dP| + |dR| + |dY|
The 3 starting positions differ only in sensor roll in degrees:
1). 0
2). 45
3). 90
With each starting position the sensor is tared then elevated to 30 degrees along the roll axis.
The problem is that for 1) and 3), as expected, I get OC = 30, representing a pure pitch change and a pure yaw change of 30 degrees respectively. However, for 2), OC is significantly greater than 30, being a sum of non-zero pitch, roll and yaw.
Is this as expected? Assuming it is, is there a better measure of OC which is not sensitive to starting position?
The answer supplied by Nico in the comments was to use a quaternion distance: the dot product of the initial and final quaternions, which my sensor supplied as unit quaternions where x² + y² + z² + w² = 1.
The dot product is given by:
dot = x_i·x_f + y_i·y_f + z_i·z_f + w_i·w_f
Furthermore, I used the relative angle theta (in radians) between these quaternions, given by:
theta = 2·cos⁻¹(dot)
The sensors gave greatly improved results using theta, which improved further once a gradient-descent calibration was applied.
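For reference, a minimal C# sketch of that computation (the class and method names are mine; clamping the dot product, and taking |dot| so that q and -q, which encode the same physical rotation, give the same angle, are standard safeguards rather than part of the original answer):

using System;

static class QuaternionDistance
{
    // Angle (radians) between two unit quaternions (xi..wi initial, xf..wf final).
    public static double Angle(
        double xi, double yi, double zi, double wi,
        double xf, double yf, double zf, double wf)
    {
        double dot = xi * xf + yi * yf + zi * zf + wi * wf;
        dot = Math.Max(-1.0, Math.Min(1.0, dot)); // guard acos against rounding drift
        // |dot| makes q and -q (the same rotation) give the same angle.
        return 2.0 * Math.Acos(Math.Abs(dot));
    }
}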
I have a 4x4 camera matrix comprised of right, up, forward and position vectors.
I raytrace the scene with the following code that I found in a tutorial but don't entirely understand:
for (int i = 0; i < m_imageSize.width; ++i)
{
    for (int j = 0; j < m_imageSize.height; ++j)
    {
        u = (i + .5f) / (float)(m_imageSize.width - 1) - .5f;
        v = (m_imageSize.height - 1 - j + .5f) / (float)(m_imageSize.height - 1) - .5f;
        Ray ray(cameraPosition, normalize(u*cameraRight + v*cameraUp + 1 / tanf(m_verticalFovAngleRadian) *cameraForward));
        // ... trace 'ray' and shade pixel (i, j) ...
    }
}
I have a couple of questions:
How can I find the focal length of my raytracing camera?
Where is my image plane?
Why does cameraForward need to be multiplied by 1/tanf(m_verticalFovAngleRadian)?
Focal length is a property of lens systems. The camera model that this code uses, however, is a pinhole camera, which does not use lenses at all. So, strictly speaking, the camera does not really have a focal length. The corresponding optical properties are instead expressed as the field of view (the angle that the camera can observe; usually the vertical one). You could calculate the focal length of a camera that has an equivalent field of view with the following formula (see Wikipedia):
FOV = 2 * arctan(x / (2f))

where:
FOV ... diagonal field of view
x ... diagonal of film; by convention 24x36 mm -> x = 43.266 mm
f ... focal length
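Rearranged for f, a quick C# sketch of that calculation (the class and method names are mine; the default film diagonal is the 43.266 mm convention above):

using System;

static class PinholeCamera
{
    // From FOV = 2 * arctan(x / (2f))  =>  f = x / (2 * tan(FOV / 2)).
    public static double EquivalentFocalLengthMm(double fovRadians, double filmDiagonalMm = 43.266)
    {
        return filmDiagonalMm / (2.0 * Math.Tan(fovRadians / 2.0));
    }
}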
There is no unique image plane. Any plane that is perpendicular to the view direction can be seen as the image plane. In fact, the projected images differ only in their scale.
For your last question, let's take a closer look at the code:
u = (i + .5f) / (float)(m_imageSize.width - 1) - .5f;
v = (m_imageSize.height - 1 - j + .5f) / (float)(m_imageSize.height - 1) - .5f;
These formulas calculate u/v coordinates between -0.5 and 0.5 for every pixel, assuming that the entire image fits in the box between -0.5 and 0.5.
u*cameraRight + v*cameraUp
... is just placing the x/y coordinates of the ray on the pixel.
... + 1 / tanf(m_verticalFovAngleRadian) *cameraForward
... is defining the depth component of the ray and ultimately the depth of the image plane you are using. Basically, this is making the ray steeper or shallower. Assume that you have a very small field of view, then 1/tan(fov) is a very large number. So, the image plane is very far away, which produces exactly this small field of view (when keeping the size of the image plane constant since you already set the x/y components). On the other hand, if the field of view is large, the image plane moves closer. Note that this notion of image plane is only conceptual. As I said, all other image planes are equally valid and would produce the same image. Another way (and maybe a more intuitive one) to specify the ray would be
u * tanf(m_verticalFovAngleRadian) * cameraRight
+ v * tanf(m_verticalFovAngleRadian) * cameraUp
+ 1 * cameraForward
As you see, this is exactly the same ray (just scaled). The idea here is to set the conceptual image plane to a depth of 1 and scale the x/y components to adapt the size of the image plane. tan(fov) (with fov being the half field of view) is exactly the size of the half image plane at a depth of 1. Just draw a triangle to verify that. Note that this code is only able to produce square image planes. If you want to allow rectangular ones, you need to take into account the ratio of the side lengths.
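To make that concrete, here is a hedged C# sketch of ray generation with a rectangular image plane at depth 1 (using System.Numerics; note that vFov here is the full vertical field of view, which differs slightly from the tutorial's constant, so treat the convention as an assumption):

using System;
using System.Numerics;

static class RayGen
{
    // Direction of the camera ray through pixel (i, j), image plane at depth 1.
    public static Vector3 RayDirection(int i, int j, int width, int height,
        Vector3 right, Vector3 up, Vector3 forward, float vFov)
    {
        float u = (i + 0.5f) / width - 0.5f;               // [-0.5, 0.5], left to right
        float v = (height - 1 - j + 0.5f) / height - 0.5f; // [-0.5, 0.5], bottom to top
        float halfH = MathF.Tan(vFov / 2f);                // half plane height at depth 1
        float aspect = width / (float)height;
        Vector3 dir = u * 2f * halfH * aspect * right
                    + v * 2f * halfH * up
                    + forward;
        return Vector3.Normalize(dir);
    }
}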
So here's a little bit of geometry for you. I've been stuck on this for a while now:
I need to write a script (in C#, but feel free to answer in whatever language you'd like) that generates random points. A point has two values, x and y.
I must generate N points total (where N > 1 and is chosen randomly, up to 100).
Point 1 must be x = 0, y = 0. Point 2 must be at distance 1 from point 1, so that sqrt(x² + y²) = 1.
Point 3 must be at distance 1 from point 2, and so on and so forth.
Now here's the tricky part: point N must be at distance 1 from point 1. So if you were to connect all points into a single shape, you'd get a closed shape with every edge being the same length.
(Edges may cross and you may even have two points at exactly the same location. As long as it's random.)
Any idea how you'd do that?
I would do it by simulating a chain. There are two basic approaches. The first is to start from a regular polygon and then randomize one point a bit (rotate it a bit), iterating over the rest to maintain the segment size = 1.
The second is to start with a fully random open chain (like in MBo's answer) and then iteratively change the angles until the last point is at the desired distance from the first point. I think the second approach is a bit simpler to code...
If you want something more complicated, you can generate M random points and treat them as the control points of a closed loop of cubic Bézier patches. Then just find N equidistant points on it (this is the hard part) and rescale the whole thing so the segment length is 1.
If you want to try the first approach:
1) Regular polygon start (closed loop)
Start with a regular polygon (equidistant points on a circle). So divide the circle into N angular segments and select the radius r so the segment length matches l = 1:
r = 0.5/sin(pi/N) ... from the half-angle triangle
(A small C# sketch of this starting polygon follows this list.)
2) Make a function to rotate the i-th point by some single small step
Just rotate the i-th point around the (i-1)-th point with radius 1, and then iteratively change the points {i+1, ..., N} to match the segment sizes.
3) You can exploit symmetry to avoid bullet #2 (just invert the rotation of the 2 touching segments for a random point p(i) and loop this many times), but this will not lead to a very random result for small N.
4) To make it more random you can apply the symmetry to whole parts (between 2 random points) instead of only 2 lines.
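As promised, a minimal C# sketch of the regular-polygon start (the class name is mine; the circumradius comes from the half-angle triangle, since side = 2·r·sin(pi/N)):

using System;
using System.Numerics;

static class PolygonStart
{
    // N equidistant points on a circle whose radius makes every side length 1.
    public static Vector2[] RegularPolygon(int n)
    {
        double r = 0.5 / Math.Sin(Math.PI / n); // circumradius for side length 1
        var pts = new Vector2[n];
        for (int i = 0; i < n; i++)
        {
            double a = 2.0 * Math.PI * i / n;
            pts[i] = new Vector2((float)(r * Math.Cos(a)), (float)(r * Math.Sin(a)));
        }
        return pts;
    }
}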
The second approach is like this (a C# sketch follows the notes below):
1) Create a randomized open chain (like in MBo's answer)
So all segments already have size = 1.0. Remember the angle as well, not just the position.
2) i-th point iteration
For simplicity, let the points be called p1, p2, ..., pn:
compute d0 = ||pn - p1| - 1.0|
rotate point pi left by some small angle step da
compute dl = ||pn - p1| - 1.0|
rotate point pi right by 2.0*da
compute dr = ||pn - p1| - 1.0|
rotate point pi back to its original position ... left by da
Now choose the direction closer to the solution (min of d0, dl, dr):
if d0 is minimal, do not change this point at all and stop
if dl is minimal, rotate left by da while dl keeps decreasing
if dr is minimal, rotate right by da while dr keeps decreasing
3) Solution
Loop bullet #2 over all points while d = ||pn - p1| - 1.0| is decreasing, then change da to da *= 0.1 and loop again. Stop if the da step is too small or there is no change in d after a loop iteration.
[notes]
Both solutions are not precise: your distances will be very close to 1.0 but can be off by +/- some error dependent on the last da step size. If you rotate point pi, then just add/subtract the angle to all points pi, pi+1, pi+2, ..., pn.
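Here is a rough C# sketch of that second approach. It assumes the points sit in parallel x/y arrays with unit segments already generated (as in MBo's pseudocode below); the class and helper names are mine, and it takes a single step per point per pass rather than looping "while decreasing", which converges the same way:

using System;

static class ChainCloser
{
    // Close a random open unit chain: iteratively rotate point i (and everything
    // after it) around point i-1 until |pn - p1| is within tolerance of 1.0.
    public static void CloseChain(double[] x, double[] y)
    {
        double da = 0.1; // initial angle step in radians
        while (da > 1e-9)
        {
            bool improved = false;
            for (int i = 1; i < x.Length; i++)
            {
                double d0 = ClosureError(x, y);
                Rotate(x, y, i, +da); double dl = ClosureError(x, y);
                Rotate(x, y, i, -2 * da); double dr = ClosureError(x, y);
                Rotate(x, y, i, +da); // back to the original position
                if (dl < d0 && dl <= dr) { Rotate(x, y, i, +da); improved = true; }
                else if (dr < d0) { Rotate(x, y, i, -da); improved = true; }
            }
            if (!improved) da *= 0.1; // refine the step once no point improves
        }
    }

    // Rotate points i..n-1 around point i-1 by angle a; segment lengths are preserved.
    static void Rotate(double[] x, double[] y, int i, double a)
    {
        double cx = x[i - 1], cy = y[i - 1];
        double c = Math.Cos(a), s = Math.Sin(a);
        for (int j = i; j < x.Length; j++)
        {
            double dx = x[j] - cx, dy = y[j] - cy;
            x[j] = cx + dx * c - dy * s;
            y[j] = cy + dx * s + dy * c;
        }
    }

    // d = ||pn - p1| - 1.0| from the steps above.
    static double ClosureError(double[] x, double[] y)
    {
        double dx = x[x.Length - 1] - x[0], dy = y[y.Length - 1] - y[0];
        return Math.Abs(Math.Sqrt(dx * dx + dy * dy) - 1.0);
    }
}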
Edit: This is not a full answer; the closure constraint (point N at distance 1 from point 1) has not been taken into account.
It is known that Cos(Fi)^2 + Sin(Fi)^2 = 1 for any angle Fi
So you may use the next approach:
P[0].X = 0
P[0].Y = 0
for i = 1 .. N - 1:
    RandomAngle = 2 * Pi * Random(0..1)
    P[i].X = P[i-1].X + Cos(RandomAngle)
    P[i].Y = P[i-1].Y + Sin(RandomAngle)
So, I've been struggling with a frankly now infuriating problem all day today.
Given the vertices of a triangle on a plane (just 3 points, 6 free parameters), I need to calculate the area of intersection of this triangle with the unit square defined by {0,0} and {1,1}. (I chose this because any square in 2D can be transformed to this, and the same transformation can move the 3 vertices.)
So now the problem is simplified down to only 6 parameters, 3 points... which I think is small enough that I'd be willing to code up / find the full solution.
(I would like this to run on a GPU for literally more than 2 million triangles every <0.5 seconds, if possible; hence the need for simplification and for avoiding data structures and libraries.)
In terms of my attempt at the solution, I've... got a list of ways I've come up with, none of which seem fast or ... specific to the nice case (too general).
Option 1: Find the enclosed polygon; it can be anything from a triangle up to a 6-gon. Do this by using one of the O(n)-time convex polygon intersection algorithms that I found. Then sort the intersection points (new vertices, up to 7 of them, O(n log n)) in either CW or CCW order, so that I can run a simple area algorithm on the points (based on Green's theorem) (O(n) again). This is the fastest I can come up with for an arbitrary convex n-gon intersecting another m-gon. However... my problem is definitely not that complex; it's a special case, so it should have a better solution...
Option 2:
Since I know it's a triangle and a unit square, I can simply find the list of intersection points in a more brute-force way (rather than using some algorithm that is... frankly a little frustrating to implement, as listed above).
There are only 19 points to check: 4 corners of the square that may be inside the triangle, 3 vertices of the triangle that may be inside the square, and then each of the triangle's 3 lines can intersect the 4 lines of the square (i.e. the lines y=0, y=1, x=0, x=1), which is another 12 points. So 12 + 3 + 4 = 19 points to check.
Once I have the (at most 6, at fewest 3) points that form this intersection, I can then follow up with one of two methods that I can think of.
2a: Sort them by increasing x value, and simply decompose the shape into its sub-triangle / 4-gon shapes, each with an easy formula based on the limiting top and bottom lines. Sum up the areas.
or 2b: Again sort the intersection points in some cyclic order, and then calculate the area based on Green's theorem.
Unfortunately, this still ends up being just as complex as far as I can tell. I can start breaking up the cases a little more for finding the intersection points, since I know it's just 0s and 1s for the square, which makes the math drop some terms... but it's not necessarily simple.
Option 3: Start separating the problem based on various conditions. Eg. 0, 1, 2, or 3 points of triangle inside square. And then for each case, run through all possible number of intersections, and then for each of those cases of polygon shapes, write down the area solution uniquely.
Option 4: some formula with Heaviside step functions. This is probably the one I want the most. I suspect it'll be a little... big, but maybe I'm optimistic that it is possible, and that it would be the fastest computationally once I have the formula.
--- Overall, I know that it can be solved using some high-level library (Clipper, for instance). I also realize that writing general solutions isn't so hard when using data structures of various kinds (a linked list, followed by sorting). And all those cases would be okay if I just needed to do this a few times. But I need to run it as an image-processing step, on the order of >9 * 1024*1024 times per image, taking images at, let's say, 1 fps (technically I will want to push this speed up as fast as possible, but the lower bound is 1 second to calculate 9 million of these triangle intersection area problems). This might not be possible on a CPU, which is fine; I'll probably end up implementing it in CUDA anyway, but I do want to push the limit of speed on this problem.
Edit: So, I ended up going with Option 2b. Since there are only 19 intersections possible, of which at most 6 will define the shape, I first find those 3 to 6 vertices. Then I sort them in a cyclic (CCW) order. And then I find the area by calculating the area of that polygon.
Here is the test code I wrote to do that (it's for Igor, but should be readable as pseudocode). Unfortunately it's a little long-winded, but... other than my crappy sorting algorithm (which shouldn't need more than ~20 swaps, so not much overhead compared to writing a better sort), I don't think I can make it any faster. Though I am open to any suggestions or oversights I might have had in choosing this option.
function calculateAreaUnitSquare(xPos, yPos)
    wave xPos
    wave yPos
    // First, make arrays for the result. Only 7 possible vertices at most for this geometry.
    Make/o/N=(7) outputVertexX = NaN
    Make/o/N=(7) outputVertexY = NaN
    variable pointsfound = 0
    // Check 4 corners of square
    // Do this by checking each corner against the parameterized plane described by basis vectors p2-p0 and p1-p0.
    // (eg. project point - p0 onto p2-p0 and onto p1-p0, using appropriate parameterization scaling (not unit).)
    // Once we have the parameterizations, it's possible to check if the corner is inside the triangle,
    // by checking that u >= 0, v >= 0 and 1-u-v >= 0.
    variable denom = yPos[0]*xPos[1]-xPos[0]*yPos[1]-yPos[0]*xPos[2]+yPos[1]*xPos[2]+xPos[0]*yPos[2]-xPos[1]*yPos[2]
    //variable u00 = yPos[0]*xPos[1]-xPos[0]*yPos[1]-yPos[0]*Xx+yPos[1]*Xx+xPos[0]*Yx-xPos[1]*Yx
    //variable v00 = -yPos[2]*Xx+yPos[0]*(Xx-xPos[2])+xPos[0]*(yPos[2]-Yx)+yPos[2]*Yx
    variable u00 = (yPos[0]*xPos[1]-xPos[0]*yPos[1])/denom
    variable v00 = (yPos[0]*(-xPos[2])+xPos[0]*(yPos[2]))/denom
    variable u01 = (yPos[0]*xPos[1]-xPos[0]*yPos[1]+xPos[0]-xPos[1])/denom
    variable v01 = (yPos[0]*(-xPos[2])+xPos[0]*(yPos[2]-1)+xPos[2])/denom
    variable u11 = (yPos[0]*xPos[1]-xPos[0]*yPos[1]-yPos[0]+yPos[1]+xPos[0]-xPos[1])/denom
    variable v11 = (-yPos[2]+yPos[0]*(1-xPos[2])+xPos[0]*(yPos[2]-1)+xPos[2])/denom
    variable u10 = (yPos[0]*xPos[1]-xPos[0]*yPos[1]-yPos[0]+yPos[1])/denom
    variable v10 = (-yPos[2]+yPos[0]*(1-xPos[2])+xPos[0]*(yPos[2]))/denom
    if(u00 >= 0 && v00 >= 0 && (1-u00-v00) >= 0)
        outputVertexX[pointsfound] = 0
        outputVertexY[pointsfound] = 0
        pointsfound += 1
    endif
    if(u01 >= 0 && v01 >= 0 && (1-u01-v01) >= 0)
        outputVertexX[pointsfound] = 0
        outputVertexY[pointsfound] = 1
        pointsfound += 1
    endif
    if(u10 >= 0 && v10 >= 0 && (1-u10-v10) >= 0)
        outputVertexX[pointsfound] = 1
        outputVertexY[pointsfound] = 0
        pointsfound += 1
    endif
    if(u11 >= 0 && v11 >= 0 && (1-u11-v11) >= 0)
        outputVertexX[pointsfound] = 1
        outputVertexY[pointsfound] = 1
        pointsfound += 1
    endif
    // Check the 3 points of the triangle. This is easy: just see if each is bounded in the unit square. If it is, add it.
    variable i = 0
    for(i=0; i<3; i+=1)
        if(xPos[i] >= 0 && xPos[i] <= 1)
            if(yPos[i] >= 0 && yPos[i] <= 1)
                if(!((xPos[i] == 0 || xPos[i] == 1) && (yPos[i] == 0 || yPos[i] == 1)))
                    outputVertexX[pointsfound] = xPos[i]
                    outputVertexY[pointsfound] = yPos[i]
                    pointsfound += 1
                endif
            endif
        endif
    endfor
    // Check intersections.
    // Procedure: loop over the 3 lines of the triangle.
    // For each line:
    //   Check if vertical.
    //   If not vertical, find the y intercepts with the x=0 and x=1 lines.
    //     If a y intercept is between 0 and 1, add the point.
    //   Check if horizontal.
    //   If not horizontal, find the x intercepts with the y=0 and y=1 lines.
    //     If an x intercept is between 0 and 1, add the point.
    for(i=0; i<3; i+=1)
        variable iN = mod(i+1,3)
        if(xPos[i] != xPos[iN])
            variable tx0 = xPos[i]/(xPos[i] - xPos[iN])
            variable tx1 = (xPos[i]-1)/(xPos[i] - xPos[iN])
            if(tx0 > 0 && tx0 < 1)
                variable yInt = (yPos[iN]-yPos[i])*tx0+yPos[i]
                if(yInt > 0 && yInt < 1)
                    outputVertexX[pointsfound] = 0
                    outputVertexY[pointsfound] = yInt
                    pointsfound += 1
                endif
            endif
            if(tx1 > 0 && tx1 < 1)
                yInt = (yPos[iN]-yPos[i])*tx1+yPos[i]
                if(yInt > 0 && yInt < 1)
                    outputVertexX[pointsfound] = 1
                    outputVertexY[pointsfound] = yInt
                    pointsfound += 1
                endif
            endif
        endif
        if(yPos[i] != yPos[iN])
            variable ty0 = yPos[i]/(yPos[i] - yPos[iN])
            variable ty1 = (yPos[i]-1)/(yPos[i] - yPos[iN])
            if(ty0 > 0 && ty0 < 1)
                variable xInt = (xPos[iN]-xPos[i])*ty0+xPos[i]
                if(xInt > 0 && xInt < 1)
                    outputVertexX[pointsfound] = xInt
                    outputVertexY[pointsfound] = 0
                    pointsfound += 1
                endif
            endif
            if(ty1 > 0 && ty1 < 1)
                xInt = (xPos[iN]-xPos[i])*ty1+xPos[i]
                if(xInt > 0 && xInt < 1)
                    outputVertexX[pointsfound] = xInt
                    outputVertexY[pointsfound] = 1
                    pointsfound += 1
                endif
            endif
        endif
    endfor
    // Now we have all the vertices (up to 6) that we need. Next step: find the lowest-y point of the vertices.
    // If there are multiple with the same low y, find the lowest x of these.
    // Swap this vertex to be the first vertex.
    variable lowY = 1
    variable lowX = 1
    variable m = 0
    for(i=0; i<pointsfound; i+=1)
        if(outputVertexY[i] < lowY)
            m = i
            lowY = outputVertexY[i]
            lowX = outputVertexX[i]
        elseif(outputVertexY[i] == lowY)
            if(outputVertexX[i] < lowX)
                m = i
                lowY = outputVertexY[i]
                lowX = outputVertexX[i]
            endif
        endif
    endfor
    outputVertexX[m] = outputVertexX[0]
    outputVertexY[m] = outputVertexY[0]
    outputVertexX[0] = lowX
    outputVertexY[0] = lowY
    // Now we have the bottom-left-most point (bottom preferred).
    // Calculate the cos(theta) between the unit x-hat vector and the directions to the other vertices.
    make/o/N=(pointsfound) angles = (p != 0) ? ((outputVertexX[p]-lowX) / sqrt((outputVertexX[p]-lowX)^2+(outputVertexY[p]-lowY)^2)) : 0
    // Now sort the remaining vertices based on this angle offset. This orients the points of the convex polygon in CCW order.
    // (This sort is crappy, but there will be, in theory, at most ~25 swaps, which in the grand scheme of operations isn't so bad.)
    variable j
    for(i=1; i<pointsfound; i+=1)
        for(j=i+1; j<pointsfound; j+=1)
            if(angles[j] > angles[i])
                variable tempX = outputVertexX[j]
                variable tempY = outputVertexY[j]
                outputVertexX[j] = outputVertexX[i]
                outputVertexY[j] = outputVertexY[i]
                outputVertexX[i] = tempX
                outputVertexY[i] = tempY
                variable tempA = angles[j]
                angles[j] = angles[i]
                angles[i] = tempA
            endif
        endfor
    endfor
    // Now the list is ordered!
    // Calculate the area given a list of CCW-oriented points of a convex polygon.
    // Simple and easy formula: http://www.mathwords.com/a/area_convex_polygon.htm
    variable totA = 0
    for(i=0; i<pointsfound; i+=1)
        totA += outputVertexX[i]*outputVertexY[mod(i+1,pointsfound)] - outputVertexY[i]*outputVertexX[mod(i+1,pointsfound)]
    endfor
    totA /= 2
    return totA
end
I think the Cohen-Sutherland line-clipping algorithm is your friend here.
First off check the bounding box of the triangle against the square to catch the trivial cases (triangle inside square, triangle outside square).
Next check for the case where the square lies completely within the triangle.
Next consider your triangle vertices A, B and C in clockwise order. Clip the line segments AB, BC and CA against the square. They will either be altered such that they lie within the square or are found to lie outside, in which case they can be ignored.
You now have an ordered list of up to three line segments that define some of the edges of the intersection polygon. It is easy to work out how to traverse from one edge to the next to find the other edges of the intersection polygon. Consider the endpoint of one line segment (e) against the start of the next (s):
If e is coincident with s, as would be the case when a triangle vertex lies within the square, then no traversal is required.
If e and s differ, then we need to traverse clockwise around the boundary of the square.
Note that this traversal will be in clockwise order, so there is no need to compute the vertices of the intersection shape, sort them into order and then compute the area. The area can be computed as you go without having to store the vertices.
Consider the following examples:
In the first case:
We clip the lines AB, BC and CA against the square, producing the line segments ab>ba and ca>ac
ab>ba forms the first edge of the intersection polygon
To traverse from ba to ca: ba lies on y=1, while ca does not, so the next edge is ba>(1,1)
(1,1) and ca both lie on x=1, so the next edge is (1,1)>ca
The next edge is a line segment we already have, ca>ac
ac and ab are coincident, so no traversal is needed (you might as well just compute the area for a degenerate edge and avoid the branch in these cases)
In the second case, clipping the triangle edges against the square gives us ab>ba, bc>cb and ca>ac. Traversal between these segments is trivial as the start and end points lie on the same square edges.
In the third case the traversal from ba to ca goes through two square vertices, but it is still a simple matter of comparing the square edges on which they lie:
ba lies on y=1, ca does not, so next vertex is (1,1)
(1,1) lies on x=1, ca does not, so next vertex is (1,0)
(1,0) lies on y=0, as does ca, so next vertex is ca.
Given the large number of triangles, I would recommend a scanline algorithm: sort all the points first by X and second by Y, then proceed in the X direction with a "scan line" that keeps a heap of Y-sorted intersections of all lines with that line. This approach has been widely used for Boolean operations on large collections of polygons: operations such as AND, OR, XOR, INSIDE, OUTSIDE, etc. all take O(n*log(n)).
It should be fairly straightforward to augment Boolean AND operation, implemented with the scanline algorithm to find the areas you need. The complexity will remain O(n*log(n)) on the number of triangles. The algorithm would also apply to intersections with arbitrary collections of arbitrary polygons, in case you would need to extend to that.
On second thought, if you don't need anything other than the triangle areas, you could do that in O(n), and scanline may be overkill.
I came to this question late, but I think I've come up with a more fully fleshed-out solution along the lines of ryanm's answer. I'll give an outline for others trying to do this problem at least somewhat efficiently.
First you have two trivial cases to check:
1) Triangle lies entirely within the square
2) Square lies entirely within the triangle (Just check if all corners are inside the triangle)
If neither is true, then things get interesting.
First, use either the Cohen-Sutherland or Liang-Barsky algorithm to clip each edge of the triangle to the square. (The linked article contains a nice bit of code that you can essentially just copy-paste if you're using C).
Given a triangle edge, these algorithms will output either a clipped edge or a flag denoting that the edge lies entirely outside the square. If all edges lie outside the square, then the triangle and the square are disjoint.
Otherwise, we know that the endpoints of the clipped edges constitute at least some of the vertices of the polygon representing the intersection.
We can avoid a tedious case-wise treatment by making a simple observation. All other vertices of the intersection polygon, if any, will be corners of the square that lie inside the triangle.
Simply put, the vertices of the intersection polygon will be the (unique) endpoints of the clipped triangle edges in addition to the corners of the square inside the triangle.
We'll assume that we want to order these vertices in a counter-clockwise fashion. Since the intersection polygon will always be convex, we can compute its centroid (the mean over all vertex positions) which will lie inside the polygon.
Then to each vertex, we can assign an angle using the atan2 function where the inputs are the y- and x- coordinates of the vector obtained by subtracting the centroid from the position of the vertex (i.e. the vector from the centroid to the vertex).
Finally, the vertices can be sorted in ascending order based on the values of the assigned angles, which constitutes a counter-clockwise ordering. Successive pairs of vertices correspond to the polygon edges.
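A hedged C# sketch of those last steps, combining the ordering and the (shoelace) area in one helper (the class name and tuple representation are mine; 'verts' is assumed to already hold the unique clipped-edge endpoints plus the square corners inside the triangle, as described above):

using System;
using System.Collections.Generic;
using System.Linq;

static class IntersectionArea
{
    // Order the vertices CCW around their centroid, then apply the shoelace formula.
    // Valid here because the intersection of two convex shapes is always convex.
    public static double PolygonArea(List<(double X, double Y)> verts)
    {
        double cx = verts.Average(v => v.X);
        double cy = verts.Average(v => v.Y);
        var ordered = verts.OrderBy(v => Math.Atan2(v.Y - cy, v.X - cx)).ToList();
        double area = 0;
        for (int i = 0; i < ordered.Count; i++)
        {
            var a = ordered[i];
            var b = ordered[(i + 1) % ordered.Count];
            area += a.X * b.Y - a.Y * b.X;
        }
        return area / 2.0; // positive for the CCW ordering produced above
    }
}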