I want an efficient algorithm to fill a polygon with an image; specifically, I want to fill a trapezoid with an image. Currently I am doing it in two steps:
1) First perform a StretchBlt on the image,
2) Then perform a column-by-column vertical StretchBlt.
Is there a better method to implement this? Is there a generic and fast algorithm which can fill any polygon?
Thanks,
Sunny
I can't help you with the distortion part, but filling polygons is pretty simple, especially if they are convex.
For each Y scan line have a table indexed by Y, containing a minX and maxX.
For each edge, run a DDA line-drawing algorithm, and use it to fill in the table entries.
For each Y line, now you have a minX and maxX, so you can just fill that segment of the scan line.
The hard part is a mental trick - do not think of coordinates as specifying pixels. Think of coordinates as lying between the pixels. In other words, if you have a rectangle going from point 0,0 to point 2,2, it should light up 4 pixels, not 9. Most problems with polygon-filling revolve around this issue.
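To make that concrete, here is a minimal C# sketch of the min/max-table approach for a convex polygon. The setPixel callback, the table sizing, and the half-open pixel convention are my own assumptions, not part of the original answer:

using System;
using System.Drawing;

static class ConvexFill
{
    // Fill a convex polygon by recording, for each scan line, the leftmost
    // and rightmost X touched by any edge (the minX/maxX table).
    // Assumes the polygon fits within rows 0..height-1.
    public static void Fill(Point[] poly, int height, Action<int, int> setPixel)
    {
        var minX = new double[height];
        var maxX = new double[height];
        for (int y = 0; y < height; y++) { minX[y] = double.MaxValue; maxX[y] = double.MinValue; }

        // Run a simple DDA along each edge, filling in the table entries.
        for (int i = 0; i < poly.Length; i++)
        {
            Point p = poly[i], q = poly[(i + 1) % poly.Length];
            if (p.Y == q.Y) continue;                // horizontal edges add nothing
            if (p.Y > q.Y) (p, q) = (q, p);          // always walk top-down
            double x = p.X, dx = (double)(q.X - p.X) / (q.Y - p.Y);
            for (int y = p.Y; y < q.Y; y++)          // half-open: stop before q.Y
            {
                if (x < minX[y]) minX[y] = x;
                if (x > maxX[y]) maxX[y] = x;
                x += dx;
            }
        }

        // Fill each scan line between the recorded extremes.
        for (int y = 0; y < height; y++)
        {
            if (minX[y] > maxX[y]) continue;         // no edge crossed this line
            for (int x = (int)Math.Ceiling(minX[y]); x < maxX[y]; x++)
                setPixel(x, y);
        }
    }
}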
ADDED: OK, it sounds like what you're really asking is how to stretch the image to a non-rectangular (but trapezoidal) shape. I would do it in terms of parameters s and t, each going from 0 to 1. In other words, a location in the original rectangle is (x + w0*s, y + h0*t). Then define a function such that s and t also map to positions in the trapezoid, such as ((x + t*a) + w0*s*(1-t) + w1*s*t, y + h1*t). This defines a coordinate mapping between the two shapes. Then just scan x and y, converting to s and t, and mapping points from one to the other. You probably want a little smoothing filter rather than a direct copy.
ADDED to try to give a better explanation:
I'm supposing both your rectangle and trapezoid have top and bottom edges parallel to the X axis. The lower-left corner of the rectangle is <x0,y0>, and the lower-left corner of the trapezoid is <x1,y1>. I assume the rectangle's width and height are <w,h>.
For the trapezoid, I assume it has height h1, and that its lower width is w0, while its upper width is w1. I assume its left edge "slants" by a distance a, so that the position of its upper-left corner is <x1+a, y1+h1>. Now suppose you iterate <x,y> over the rectangle. At each point, compute s = (x-x0)/w and t = (y-y0)/h, which are both in the range 0 to 1. (I'll let you figure out how to do that without using floating point.) Then convert that to a coordinate in the trapezoid, as xt = (x1 + t*a) + s*(w0*(1-t) + w1*t), and yt = y1 + h1*t. Then <xt,yt> is the point in the trapezoid corresponding to <x,y> in the rectangle. Now I'll let you figure out how to do the copying :-) Good luck.
P.S. And please don't forget - coordinates fall between pixels, not on them.
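A minimal C# sketch of that mapping, using a plain nearest-neighbor copy instead of the smoothing filter recommended above; the Bitmap-based scaffolding is my own, but the formula is exactly the xt/yt mapping from the answer:

using System.Drawing;

static class TrapezoidMap
{
    // Copy the w-by-h rectangle at (x0, y0) in src onto the trapezoid in dst
    // with lower-left corner (x1, y1), lower width w0, upper width w1,
    // height h1, and a left edge that slants by a.
    // (Bounds checks omitted for brevity.)
    public static void Copy(Bitmap src, Bitmap dst,
        int x0, int y0, int w, int h,
        double x1, double y1, double w0, double w1, double h1, double a)
    {
        for (int y = y0; y < y0 + h; y++)
        {
            double t = (y - y0) / (double)h;
            for (int x = x0; x < x0 + w; x++)
            {
                double s = (x - x0) / (double)w;
                double xt = (x1 + t * a) + s * (w0 * (1 - t) + w1 * t);
                double yt = y1 + h1 * t;
                dst.SetPixel((int)xt, (int)yt, src.GetPixel(x, y));
            }
        }
    }
}

Note that this forward mapping can leave holes when the trapezoid is larger than the rectangle; in practice you would scan the trapezoid's pixels and invert the mapping instead, so every destination pixel gets exactly one source lookup.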
Would it be feasible to sidestep the problem and use OpenGL to do this for you? OpenGL can render to memory contexts, and if you can take advantage of any hardware acceleration by doing this, that will completely dwarf any code tweaks you can make on the CPU (although on some older cards, memory-context rendering may not be able to take advantage of the hardware).
If you want to do this completely in software, Mesa may be an option.
Here is an excerpt from Peter Shirley's Fundamentals of Computer Graphics:
11.1.2 Texture Arrays
We will assume the two dimensions to be mapped are called u and v. We also assume we have an nx by ny image that we use as the texture. Somehow we need every (u,v) to have an associated color found from the image. A fairly standard way to make texturing work for (u,v) is to first remove the integer portion of (u,v) so that it lies in the unit square. This has the effect of "tiling" the entire uv plane with copies of the now-square texture. We then use one of the three interpolation strategies to compute the image color for the coordinates.
My question is: what is the integer portion of (u,v)? I thought u and v satisfy 0 <= u,v <= 1.0. If there is an integer portion, shouldn't we be dividing u and v by the texture image width and height to get the normalized u,v values?
UV values can be less than 0 or greater than 1. The reason for dropping the integer portion is that UV values use the fractional part when indexing textures, where (0,0), (0,1), (1,0) and (1,1) correspond to the texture's corners. Allowing UV values to go beyond 0 and 1 is what enables the "tiling" effect to work.
For example, if you have a rectangle whose corners are indexed with the UV points (0,0), (0,2), (2,0), (2,2), and assuming the texture is set to tile the rectangle, then four copies of the texture will be drawn on that rectangle.
The meaning of a UV value's integer part depends on the wrapping mode. In OpenGL, for example, there are at least three wrapping modes:
GL_REPEAT - The integer part is ignored and has no meaning. This is what allows textures to tile when UV values go beyond 0 and 1.
GL_MIRRORED_REPEAT - The fractional part is mirrored if the integer part is odd.
GL_CLAMP_TO_EDGE - Values greater than 1 are clamped to 1, and values less than 0 are clamped to 0.
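As a rough illustration (this is not how a GPU actually implements them), the three rules could be written out for a single coordinate like this:

using System;

static class Wrap
{
    // GL_REPEAT: keep only the fractional part; 2.3 and -0.7 both become 0.3.
    public static double Repeat(double u) => u - Math.Floor(u);

    // GL_MIRRORED_REPEAT: the fractional part runs backwards on odd tiles.
    public static double MirroredRepeat(double u)
    {
        double f = u - Math.Floor(u);
        bool oddTile = ((long)Math.Floor(u) & 1) != 0;
        return oddTile ? 1.0 - f : f;
    }

    // GL_CLAMP_TO_EDGE: pin everything outside [0,1] to the border.
    public static double ClampToEdge(double u) => Math.Max(0.0, Math.Min(1.0, u));
}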
Peter O's answer is excellent. I want to add a high-level point: the coordinate systems used in graphics are a convention that people just stick to as a de facto standard. There's no law of nature here and it is arbitrary (but a decent standard, thank goodness). I think one reason texture mapping is often confusing is that the arbitrariness of this standard isn't obvious. The image has a de facto coordinate system on the unit square [0,1]^2. Give me a (u,v) on the unit square and I will tell you a point in the image (for example, (0.2, 0.3) is 20% to the right and 30% up from the bottom-left corner of the image). But what if you give me a (u,v) that is outside [0,1]^2, like (22.7, -13.4)? Some rule is used to map that back onto [0,1]^2, and the GL modes described are just various useful hacks to deal with that case.
My maths skills are terrible so I don't even know where to start with this. This is for a hobby project written in C#.
To keep things simple, let's say I need to operate on all of the pixels positioned inside an ellipse. How would I get an array of the valid pixel locations inside the ellipse that I need to work with?
For that task I would recommend taking a look at Bresenham's filled-circle algorithm.
If you scale the y axis you can use it to draw ellipses, too.
Bresenham algorithms work using only integer arithmetic, which makes them fast(est).
Note that this works only for axis-aligned ellipses.
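A minimal integer-only sketch of the filled-circle version; the drawSpan callback is my own placeholder, and extending it to an ellipse means swapping in the midpoint-ellipse decision variables (or scaling y, as suggested above):

using System;

static class BresenhamFill
{
    // Fill a circle of radius r centered at (cx, cy) by drawing horizontal
    // spans between symmetric points of the midpoint circle algorithm.
    // drawSpan(xLeft, xRight, y) is expected to fill one run of pixels.
    public static void FillCircle(int cx, int cy, int r, Action<int, int, int> drawSpan)
    {
        int x = r, y = 0;
        int d = 3 - 2 * r;                       // integer decision variable
        while (y <= x)
        {
            drawSpan(cx - x, cx + x, cy + y);
            drawSpan(cx - x, cx + x, cy - y);
            drawSpan(cx - y, cx + y, cy + x);
            drawSpan(cx - y, cx + y, cy - x);
            if (d < 0) d += 4 * y + 6;           // stay on the same x
            else { d += 4 * (y - x) + 10; x--; } // step inward
            y++;
        }
    }
}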
In an ellipse, the sum of the distances between a point on the ellipse and the two foci is twice the semi-major axis, so:
PF1 + PF2 = 2a
where P is the point, F1 and F2 are the foci, and a is the semi-major axis.
If the sum is less than 2a, the point is inside the ellipse. See the Wikipedia article on the ellipse for details.
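Translated to the original question ("give me the pixel locations inside the ellipse"), the test might look like this; the foci/semi-major-axis parameters and the bounding-box scan are my own framing:

using System;
using System.Collections.Generic;
using System.Drawing;

static class EllipsePixels
{
    static double Dist(double x1, double y1, double x2, double y2)
        => Math.Sqrt((x2 - x1) * (x2 - x1) + (y2 - y1) * (y2 - y1));

    // Collect every pixel whose center satisfies PF1 + PF2 < 2a.
    public static List<Point> Inside(PointF f1, PointF f2, double a, Rectangle bounds)
    {
        var result = new List<Point>();
        for (int y = bounds.Top; y < bounds.Bottom; y++)
            for (int x = bounds.Left; x < bounds.Right; x++)
            {
                double cx = x + 0.5, cy = y + 0.5;   // test the pixel center
                if (Dist(cx, cy, f1.X, f1.Y) + Dist(cx, cy, f2.X, f2.Y) < 2 * a)
                    result.Add(new Point(x, y));
            }
        return result;
    }
}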
Problem specification:
I have a rectangular, uniformly spaced grid of pixels with vertex coordinates (i,j), (i+1,j), (i,j+1), (i+1,j+1) [i = 0,...,m-1; j = 0,...,n-1] and a polygon P with vertex coordinates (x_1,y_1), ..., (x_k,y_k). Now I want to efficiently compute the percentage of every pixel overlapping with P. P can be non-convex, or even self-intersecting.
Essentially, this is a "soft" generalization of the scan-line rasterization algorithms which check efficiently if the pixel centers lie inside / outside the polygon.
I can think of the following approaches:
(1) Upsample the image (e.g. by a factor 10*10), count how many subpixel centers lie inside the polygon, and divide by 100. Problems: time efficiency, memory efficiency, accuracy.
(2) Use the scan-line algorithm on a slightly bigger grid translated by (0.5, 0.5) to compute the pixels that lie fully inside / outside, create a list of "borderline" pixels, walk counter-clockwise along the edges, and compute the intersection areas with all pixels along the way. Problems: requires subtle coding, easy to introduce bugs.
My question: Has anybody already encountered this problem, and do you know a third, superior approach? And if not, have you had better experiences with (1) or with (2)? I assume this problem also arises in the context of antialiasing.
Doing the exact geometric analysis might not be too difficult.
Deal with those pixels that are partially covered by the polygon first: you can use a technique from ray-tracing to quickly find all pixels that intersect with the polygon edges. You can then use the Cohen-Sutherland algorithm to efficiently find the points of intersection between the edge and the pixel, and hence you can compute the area of coverage for that pixel.
Note that you can avoid one of the two clipping operations involved in Cohen-Sutherland as adjacent pixels will share a segment intersection point. For instance - if you have two adjacent pixels, A and B that intersect with a segment p->q at points a1, a2, b1 and b2, then a2 and b1 will be the same. Passing the segment a2->q into the routine when clipping against B should avoid repeating work.
You'll have to treat the pixels that contain the polygon vertices specially, but again it shouldn't be too tricky: Cohen-Sutherland will help here as well.
Self-intersecting polygons will also throw up some special cases to handle - pixels that intersect with two or more edges. I can easily imagine that handling these exactly in all cases might get tricky, so I'd be tempted to just do the upsampling approach here.
Once these edge pixels have been identified, you can do the standard scan-line thing to fill in the polygon's interior pixels.
edit: Actually, now that I think more about it, you can totally skip the Cohen-Sutherland step. The algorithm in the linked paper can be easily extended to return the intersection points between the segment and the pixel grid. The segment will leave a given pixel at min( tMaxX, tMaxY ). Keep track of the last exit point to re-use as the entry point for the next pixel.
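A sketch of that grid walk, in the spirit of the Amanatides & Woo voxel traversal (my own reconstruction, not code from the linked paper). It reports the parameter t at which the segment leaves each unit pixel, which is both that pixel's exit point and the next pixel's entry point:

using System;
using System.Collections.Generic;

static class GridWalk
{
    // Walk the segment (x0,y0)->(x1,y1) across a unit pixel grid, yielding
    // each visited pixel and the t at which the segment exits it. The exit
    // point itself is (x0,y0) + t * (x1-x0, y1-y0).
    public static IEnumerable<(int px, int py, double tExit)> Walk(
        double x0, double y0, double x1, double y1)
    {
        double dx = x1 - x0, dy = y1 - y0;
        int px = (int)Math.Floor(x0), py = (int)Math.Floor(y0);
        int stepX = Math.Sign(dx), stepY = Math.Sign(dy);

        // t at which the segment first crosses a vertical / horizontal grid
        // line, and how much t grows per crossing.
        double tMaxX = dx != 0 ? ((stepX > 0 ? px + 1 : px) - x0) / dx : double.PositiveInfinity;
        double tMaxY = dy != 0 ? ((stepY > 0 ? py + 1 : py) - y0) / dy : double.PositiveInfinity;
        double tDeltaX = dx != 0 ? Math.Abs(1 / dx) : double.PositiveInfinity;
        double tDeltaY = dy != 0 ? Math.Abs(1 / dy) : double.PositiveInfinity;

        while (true)
        {
            double tExit = Math.Min(Math.Min(tMaxX, tMaxY), 1.0);
            yield return (px, py, tExit);
            if (tExit >= 1.0) yield break;        // segment ends inside this pixel
            if (tMaxX < tMaxY) { px += stepX; tMaxX += tDeltaX; }
            else { py += stepY; tMaxY += tDeltaY; }
        }
    }
}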
I would do
1a) Upsample when the pixel is partly overlapping,
but not the whole image, only the current pixel to be checked, or all pixels in the current scan line if that helps.
Then the memory argument goes away.
Speed? Up to 16x16 subsamples per pixel, I don't think speed is an issue.
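A rough sketch of that per-pixel estimate; the even-odd point-in-polygon test and the 16x16 sample grid are assumptions on my part:

using System.Drawing;

static class Coverage
{
    // Even-odd rule: count edge crossings of a ray going in +X from (x, y).
    static bool Inside(PointF[] poly, double x, double y)
    {
        bool inside = false;
        for (int i = 0, j = poly.Length - 1; i < poly.Length; j = i++)
            if ((poly[i].Y > y) != (poly[j].Y > y) &&
                x < poly[j].X + (poly[i].X - poly[j].X) * (y - poly[j].Y) / (poly[i].Y - poly[j].Y))
                inside = !inside;
        return inside;
    }

    // Estimate the fraction of pixel (px, py) covered by the polygon by
    // testing an n-by-n grid of subsample points inside that one pixel.
    public static double Estimate(PointF[] poly, int px, int py, int n = 16)
    {
        int hits = 0;
        for (int sy = 0; sy < n; sy++)
            for (int sx = 0; sx < n; sx++)
                if (Inside(poly, px + (sx + 0.5) / n, py + (sy + 0.5) / n))
                    hits++;
        return hits / (double)(n * n);
    }
}

The even-odd rule also handles the self-intersecting polygons mentioned in the question, for one common definition of "inside".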
I've used the following code to make 3 points, draw a curve through them onto a bitmap, then draw the bitmap to the main form; however, it seems to always draw point 3 above point 2, because its Y coordinate is lower than point 2's. Is there a way to get around this, as I need a curve that curves up and down, rather than just up?
// The original snippet did not declare pntPoints or the pen p;
// these declarations are added so it compiles.
Point[] pntPoints = new Point[3];
using (Bitmap bit = new Bitmap(490, 490))
using (Graphics g = Graphics.FromImage(bit))
using (Graphics form = this.CreateGraphics())
using (Pen p = new Pen(Color.Black))
{
    pntPoints[0] = this.pictureBox1.Location;
    pntPoints[1] = new Point(100, 300);
    pntPoints[2] = new Point(200, 150);
    g.DrawCurve(p, pntPoints);        // draw the curve onto the bitmap
    form.DrawImage(bit, 0, 5);        // then blit the bitmap onto the form
}
The Y coordinate for point 3 is not lower on the screen; it's actually higher. The (0,0) point of Graphics is in the top-left corner, and the Y value increases from the top down rather than from the bottom up. So a point (0,100) will be higher than (0,200) in the resulting image.
If you want a curve that goes up then down, you should place your first point at (0, 489), your second point at (100, 190), and your third point at (200, 340).
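If it's easier to think in bottom-up coordinates, a tiny helper can flip Y at the last moment; the 490 default here matches the bitmap height in the question:

// Convert a "math-style" Y (origin at bottom-left) to a GDI+ Y
// (origin at top-left) for a bitmap of the given height.
static int FlipY(int y, int height = 490) => height - 1 - y;

// e.g. a point 150 units up from the bottom of the bitmap:
Point p2 = new Point(200, FlipY(150));   // becomes (200, 339)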
I recommend you put in a debug function that will mark and identify the points themselves, so you can see exactly where they are. A pixel in an off color, the index of the point, and the coordinates together will help you identify what is going where.
Now, I'm wondering, are those two points really supposed to be absolute, or are they supposed to be relative to the first point?
I've spent a good amount of time getting intersections working correctly between various 2D shapes (circle-circle, circle-tri, circle-rect, rect-rect; a huge thanks to those who've solved such problems, from which I drew my solutions) for a simple project, and am now in the process of trying to implement a triangle-AABB intersection test.
I'm a bit stuck, however. I've tried searching online and thinking it through, but I've been unable to get any ideas. The thing that's given me the biggest issue at the moment is checking whether the edges of the triangle (which is isosceles, by the way) intersect the rectangle when no vertices lie within the rectangle.
Any ideas how I could get this working?
EDIT: To give a bit more insight as to stages as I think they should occur:
1 - Check to see if any vertices lie within the rectangle (this part is easy). If yes, collision; otherwise continue.
2 - Check to see if any edges are intersecting the rectangle. This is where I'm stuck. I have little idea how to implement this.
I'd calculate a collection of equations which define the 4 lines of the rectangle, and then solve against a collection of equations which define lines of the triangle.
For example, given a rectangle with lowest point (x1, y1) and one side having a gradient of g, one of the lines of the rectangle will be y = gx + y1. Find equations to represent the other 3 sides of the rectangle as well.
The lines which form the sides of the triangle will be calculated similarly. The equation for a line given two points is
y - y1 = (x - x1) * (y2 - y1)/(x2 - x1)
If any pair of x & y values satisfies both one of the rectangle's equations and one of the triangle's equations (within the extent of the two sides involved), then you have an intersection.
edit: I realise that although this is a simple algorithm it might be tricky to code; another option is to calculate formulae for the intervals that form each edge (essentially lines with a min and max value) and solve these.
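For that segment-based option, a standard parametric segment-segment intersection test looks like this (my own formulation, not the exact equations above). Run it for each of the 3 triangle edges against each of the 4 rectangle edges, and remember to also check whether a rectangle corner lies inside the triangle, to catch the case where the rectangle is fully contained:

using System.Drawing;

static class SegInt
{
    // Returns true if segments p1->p2 and p3->p4 intersect.
    // Solves p1 + t*(p2-p1) = p3 + u*(p4-p3) and checks 0 <= t,u <= 1.
    public static bool Intersects(PointF p1, PointF p2, PointF p3, PointF p4)
    {
        float d = (p2.X - p1.X) * (p4.Y - p3.Y) - (p2.Y - p1.Y) * (p4.X - p3.X);
        if (d == 0) return false;   // parallel (collinear overlap ignored here)

        float t = ((p3.X - p1.X) * (p4.Y - p3.Y) - (p3.Y - p1.Y) * (p4.X - p3.X)) / d;
        float u = ((p3.X - p1.X) * (p2.Y - p1.Y) - (p3.Y - p1.Y) * (p2.X - p1.X)) / d;
        return t >= 0 && t <= 1 && u >= 0 && u <= 1;
    }
}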