Suppose that I have a large map (say 20000x20000) defined by a large number (20000) of triangle obstacles (with integer coordinates) that may overlap.
Example map: (image omitted)
Is there a fast way to generate a 20000x20000 boolean grid in which cell (i, j) is 0 if it lies in free space and 1 if it is (mostly) inside a triangle?
The simplest way is to exploit the facilities of any graphics library: draw the triangles in white onto a black background in an 8-bit bitmap (matching an 8-bit boolean type). The bitmap memory will then contain 0/0xFF values in the needed places.
You can also use your own triangle rasterization routine to fill the boolean matrix, as in the sketch below.
It seems that the reverse approach - checking whether every point belongs to some triangle - should be slower, even with a spatial data structure for the triangles.
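For the do-it-yourself route, a minimal sketch of such a rasterization routine might look like this (the Tri type and the flat uint8_t grid are assumptions for illustration; it tests each cell center against the triangle's three edge functions, in pure integer arithmetic):

#include <algorithm>
#include <cstdint>
#include <vector>

struct Tri { int x0, y0, x1, y1, x2, y2; };

// Twice the signed area of (a,b,c); the sign encodes orientation.
static long long edge(long long ax, long long ay, long long bx, long long by,
                      long long cx, long long cy) {
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
}

void rasterize(const Tri& t, std::vector<uint8_t>& grid, int w, int h) {
    int minX = std::max(0, std::min({t.x0, t.x1, t.x2}));
    int maxX = std::min(w - 1, std::max({t.x0, t.x1, t.x2}));
    int minY = std::max(0, std::min({t.y0, t.y1, t.y2}));
    int maxY = std::min(h - 1, std::max({t.y0, t.y1, t.y2}));
    long long area = edge(t.x0, t.y0, t.x1, t.y1, t.x2, t.y2);
    if (area == 0) return;                       // skip degenerate triangles
    for (int y = minY; y <= maxY; ++y)
        for (int x = minX; x <= maxX; ++x) {
            // Test the cell center (x+0.5, y+0.5); doubling all
            // coordinates keeps the arithmetic purely integer.
            long long w0 = edge(2*t.x1, 2*t.y1, 2*t.x2, 2*t.y2, 2*x+1, 2*y+1);
            long long w1 = edge(2*t.x2, 2*t.y2, 2*t.x0, 2*t.y0, 2*x+1, 2*y+1);
            long long w2 = edge(2*t.x0, 2*t.y0, 2*t.x1, 2*t.y1, 2*x+1, 2*y+1);
            // The center is inside when all three edge functions agree
            // with the triangle's winding.
            if ((area > 0 && w0 >= 0 && w1 >= 0 && w2 >= 0) ||
                (area < 0 && w0 <= 0 && w1 <= 0 && w2 <= 0))
                grid[(size_t)y * w + x] = 1;
        }
}

Overlapping triangles need no special handling: each one only touches its own bounding box, and later triangles simply re-set cells that are already 1.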
Image morphing is mostly a graphic-design special effect for adapting one picture into another using points chosen by the artist, who matches key zones (the eyes, say) of one portrait with those of the other; some algorithm then adapts the entire picture to change from one into the other.
I would like to do something a bit similar with a shader that can load any 2 graphics, automatically choose zones of the most similar colors in the same kinds of regions of the pictures, and morph the two pictures in real time. Perhaps a shader-based version would be a lot faster at the task? Except I don't even understand how it works at all.
If you know, please don't worry about giving a complete description of the process; it would be great if you could share some vague background concepts and keywords for how to attempt a 2D texture morph in a graphics shader.
There are more morphing methods out there; the one you are describing is based on geometry.
1. morph by interpolation
You have 2 data sets with similar properties (for example, 2 images that are both 2D) and interpolate between them by some parameter. In the case of 2D images you can use linear interpolation if both images are the same resolution, or trilinear interpolation (bilinear in space plus linear in t) if not.
So you just pick corresponding pixels from each image and interpolate the actual color for some parameter t=<0,1>. For the same resolution, something like this:
for (y=0;y<img1.height;y++)
 for (x=0;x<img1.width;x++)
  img.pixel[x][y]=(1.0-t)*img1.pixel[x][y] + t*img2.pixel[x][y];
where img1, img2 are the input images and img is the output. Beware that t is a float, so you need to cast to avoid integer rounding problems, or use the scale t=<0,256> and correct the result by a bit shift right by 8 bits (or by /256). For different sizes you need to bilinearly interpolate the corresponding (x,y) position in both of the source images first. The fixed-point variant could look like the sketch below.
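For example, a minimal C++ sketch of that integer trick (assuming 8-bit channels; mix_u8 is a made-up helper name, not from any library):

#include <cstdint>

// ti plays the role of t scaled to <0,256>; the >>8 replaces /256.
uint8_t mix_u8(uint8_t a, uint8_t b, int ti) {
    return (uint8_t)((a * (256 - ti) + b * ti) >> 8);
}

At ti=0 this returns exactly a, and at ti=256 exactly b, so the shift introduces no endpoint error.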
All this can be done very easily in a fragment shader: just bind img1, img2 to texture units 0 and 1, pick the texel from each, interpolate, and output the final color. The bilinear coordinate interpolation is done automatically by GLSL because texture coordinates are normalized to <0,1> no matter the resolution. In the vertex shader you just pass the texture and vertex coordinates through, and on the main program side you just draw a single quad covering the final image output...
2. morph by geometry
You have 2 polygons (or matching point sets) and interpolate their positions between the two. For example, something like this: Morph a cube to coil. This is suited to vector graphics. You just need point correspondence, and then the interpolation is similar to #1.
for (i=0;i<points;i++)
 {
 p[i].x=(1.0-t)*p1[i].x + t*p2[i].x;
 p[i].y=(1.0-t)*p1[i].y + t*p2[i].y;
 }
where p1[i], p2[i] are the i-th points from each input geometry set and p[i] is the i-th point of the final result...
To enhance the visual appearance, the linear interpolation can be exchanged for a specific trajectory (like Bezier curves) so the morph looks cooler. For example, see
Path generation for non-intersecting disc movement on a plane
To accomplish this on the GPU you could use a geometry shader (or maybe even a tessellation shader): you would need to pass both polygons as a single primitive, and the geometry shader would then emit the interpolated polygon for the rest of the pipeline. A CPU-side sketch of the Bezier trajectory idea is below.
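For illustration, here is a quadratic Bezier path per point (a sketch; the choice of control point, the segment midpoint pushed sideways, is purely an assumption for the example):

struct Pt { float x, y; };

// Quadratic Bezier from a to b through a control point c; here c is
// the segment midpoint offset perpendicular to a->b, an illustrative choice.
Pt bezier_morph(Pt a, Pt b, float t) {
    Pt c = { (a.x + b.x) * 0.5f + (b.y - a.y) * 0.25f,
             (a.y + b.y) * 0.5f - (b.x - a.x) * 0.25f };
    float u = 1.0f - t;
    return { u*u*a.x + 2.0f*u*t*c.x + t*t*b.x,
             u*u*a.y + 2.0f*u*t*c.y + t*t*b.y };
}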
3. morph by particle swarms
In this case you find corresponding pixels in the source images by matching colors. Then you handle each pixel as a particle and create its path from its position in img1 to its position in img2 with parameter t. It is the same as #2, but instead of polygon areas you have just points. Each particle has a color and a position, and you interpolate both, because there is only a very slim chance of exact color matches with matching counts (the histograms would have to be the same), which is improbable.
4. hybrid morphing
It is any combination of #1, #2, and #3.
I am sure there are more methods for morphing; these are just the ones I know of. Also, morphing can be done not only in the spatial domain...
Here is an excerpt from Peter Shirley's Fundamentals of Computer Graphics:
11.1.2 Texture Arrays
We will assume the two dimensions to be mapped are called u and v.
We also assume we have an nx by ny image that we use as the texture.
Somehow we need every (u,v) to have an associated color found from the
image. A fairly standard way to make texturing work for (u,v) is to
first remove the integer portion of (u,v) so that it lies in the unit
square. This has the effect of "tiling" the entire uv plane with
copies of the now-square texture. We then use one of the three
interpolation strategies to compute the image color for the
coordinates.
My question is: what is the integer portion of (u,v)? I thought u and v satisfied 0 <= u,v <= 1.0. If there is an integer portion, shouldn't we be dividing u,v by the texture image width and height to get the normalized u,v values?
UV values can be less than 0 or greater than 1. The reason for dropping the integer portion is that UV values use the fractional part when indexing textures, where (0,0), (0,1), (1,0) and (1,1) correspond to the texture's corners. Allowing UV values to go beyond 0 and 1 is what enables the "tiling" effect to work.
For example, if you have a rectangle whose corners are indexed with the UV points (0,0), (0,2), (2,0), (2,2), and assuming the texture is set to tile the rectangle, then four copies of the texture will be drawn on that rectangle.
The meaning of a UV value's integer part depends on the wrapping mode. In OpenGL, for example, there are at least three wrapping modes (a CPU sketch of all three follows this list):
GL_REPEAT - The integer part is ignored and has no meaning. This is what allows textures to tile when UV values go beyond 0 and 1.
GL_MIRRORED_REPEAT - The fractional part is mirrored if the integer part is odd.
GL_CLAMP_TO_EDGE - Values greater than 1 are clamped to 1, and values less than 0 are clamped to 0.
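For illustration, these three modes could be mimicked on the CPU roughly as follows (a sketch; the function names are mine, not OpenGL API, and the clamp follows the simplified description above rather than OpenGL's exact clamp-to-texel-center behavior):

#include <cmath>

float wrap_repeat(float u) {
    return u - std::floor(u);                    // keep the fractional part
}

float wrap_mirrored_repeat(float u) {
    long long i = (long long)std::floor(u);      // integer part (tile index)
    float f = u - std::floor(u);                 // fractional part
    return (i & 1) ? 1.0f - f : f;               // mirror on odd tiles
}

float wrap_clamp_to_edge(float u) {              // simplified clamp to [0,1]
    return u < 0.0f ? 0.0f : (u > 1.0f ? 1.0f : u);
}

For example, wrap_mirrored_repeat(1.5f) gives 0.5f and wrap_mirrored_repeat(-0.25f) gives 0.25f, matching the reflection behavior described above.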
Peter O's answer is excellent. I want to add the high-level point that the coordinate systems used in graphics are a convention that people just stick to as a de facto standard: there's no law of nature here, and it is arbitrary (but a decent standard, thank goodness). I think one reason texture mapping is often confusing is that the arbitrariness of this standard isn't obvious. The convention is that the image has a de facto coordinate system on the unit square [0,1]^2. Give me a (u,v) on the unit square and I will tell you a point in the image (for example, (0.2, 0.3) is 20% to the right and 30% up from the bottom-left corner of the image). But what if you give me a (u,v) that is outside [0,1]^2, like (22.7, -13.4)? Some rule is used to map that onto [0,1]^2, and the GL modes described above are just various useful hacks to deal with that case.
I'm trying to write a 3D renderer for a scientific application that is based on vectors rather than pixels. The idea is to be able to output to vector formats such as SVG, so I would like to keep everything as vector objects rather than rasterizing. The objects will often have transparency.
At the moment I decompose everything into 3D triangles and line segments and split them where they overlap. The scene is then projected and painted with depth sorting (painter's algorithm), sorting by the minimum depth of each triangle (with ties broken by the maximum depth). This fails when a long, thin triangle behind a bigger triangle rises above it in the sort order.
The scene is only drawn once for a given set of objects. I obviously can't use z-buffering because of the vectorization and transparency. Is there a robust and reasonably fast method of drawing the triangles in the correct z order? Typically there could be a few thousand triangles.
Problem specification:
I have a rectangular, uniformly spaced grid of pixels with vertex coordinates (i,j), (i+1,j), (i,j+1), (i+1,j+1) [i=0,...,m-1; j=0,...,n-1] and a polygon P with vertex coordinates (x_1,y_1), ..., (x_k,y_k). Now I want to efficiently compute the percentage of every pixel overlapping with P. P can be non-convex, or even self-intersecting.
Essentially, this is a "soft" generalization of the scan-line rasterization algorithms, which efficiently check whether the pixel centers lie inside or outside the polygon.
I can think of the following approaches:
(1) Upsample the image (e.g. by a factor of 10x10), count how many subpixel centers lie inside the polygon, and divide by 100. Problems: time efficiency, memory efficiency, accuracy.
(2) Use the scan-line algorithm on a slightly bigger grid translated by (0.5, 0.5) to compute the pixels that lie fully inside/outside, create a list of "borderline" pixels, walk counter-clockwise along the polygon's edges, and compute the intersection areas with all pixels along the way. Problems: requires subtle coding; easy to introduce bugs.
My question: has anybody already encountered this problem, and do you know a third, superior approach? If not, have you had better experiences with (1) or with (2)? I assume this problem arises in the context of antialiasing.
Doing the exact geometric analysis might not be too difficult.
Deal first with those pixels that are partially covered by the polygon: you can use a technique from ray tracing to quickly find all pixels that intersect the polygon edges. You can then use the Cohen-Sutherland algorithm to efficiently find the points of intersection between an edge and a pixel, and hence compute the area of coverage for that pixel.
Note that you can avoid one of the two clipping operations involved in Cohen-Sutherland, as adjacent pixels share a segment intersection point. For instance, if you have two adjacent pixels A and B that intersect a segment p->q at points a1, a2 and b1, b2 respectively, then a2 and b1 will be the same point. Passing the segment a2->q into the routine when clipping against B avoids repeating work.
You'll have to treat the pixels that contain the polygon vertices specially, but again it shouldn't be too tricky: Cohen-Sutherland will help here as well.
Self-intersecting polygons will also throw up some special cases to handle - pixels that intersect with two or more edges. I can easily imagine that handling these exactly in all cases might get tricky, so I'd be tempted to just do the upsampling approach here.
Once these edge pixels have been identified, you can do the standard scan-line thing to fill in the polygon's interior pixels.
edit: Actually, now that I think more about it, you can skip the Cohen-Sutherland step entirely. The algorithm in the linked paper can easily be extended to return the intersection points between the segment and the pixel grid: the segment leaves a given pixel at min(tMaxX, tMaxY). Keep track of the last exit point to reuse it as the entry point for the next pixel. A sketch of that traversal is below.
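For illustration, a sketch of that traversal (a 2D variant of the classic voxel-traversal/DDA scheme; the function name and the printing are placeholders for whatever bookkeeping you actually need):

#include <algorithm>
#include <cmath>
#include <cstdio>

// Walk a segment p->q across the unit pixel grid, reporting for each
// pixel the parametric interval [tEntry, tExit] the segment spends in
// it; tExit = min(tMaxX, tMaxY), as described above.
void traverse(double px, double py, double qx, double qy) {
    double dx = qx - px, dy = qy - py;
    int ix = (int)std::floor(px), iy = (int)std::floor(py);
    int stepX = dx > 0 ? 1 : -1, stepY = dy > 0 ? 1 : -1;
    double tDeltaX = dx != 0 ? 1.0 / std::fabs(dx) : 1e30;  // t per column
    double tDeltaY = dy != 0 ? 1.0 / std::fabs(dy) : 1e30;  // t per row
    double tMaxX = dx != 0 ? (dx > 0 ? ix + 1 - px : px - ix) / std::fabs(dx) : 1e30;
    double tMaxY = dy != 0 ? (dy > 0 ? iy + 1 - py : py - iy) / std::fabs(dy) : 1e30;
    double tEntry = 0.0;
    for (;;) {
        double tExit = std::min(std::min(tMaxX, tMaxY), 1.0);
        // The points p + t*(q-p) at tEntry/tExit are the clipped edge
        // inside this pixel.
        std::printf("pixel (%d,%d): t in [%g,%g]\n", ix, iy, tEntry, tExit);
        if (tExit >= 1.0) break;                 // reached q
        tEntry = tExit;                          // reuse exit as next entry
        if (tMaxX < tMaxY) { ix += stepX; tMaxX += tDeltaX; }
        else               { iy += stepY; tMaxY += tDeltaY; }
    }
}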
I would do:
1a) Upsample when the pixel is partly overlapping - but not the whole image, only the current pixel to be checked (or all pixels in the current scan line, if that helps).
Then there is no memory argument. As for speed: at up to 16x16 subsamples per pixel, I don't think speed is an issue. A sketch of this per-pixel supersampling follows.
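For what it's worth, a minimal sketch of this per-pixel variant of (1) (the helper names are mine; the even-odd point-in-polygon test also behaves consistently for self-intersecting polygons):

#include <vector>

struct P { double x, y; };

// Even-odd point-in-polygon test; works for self-intersecting polygons.
static bool inside(const std::vector<P>& poly, double x, double y) {
    bool in = false;
    for (size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++)
        if ((poly[i].y > y) != (poly[j].y > y) &&
            x < (poly[j].x - poly[i].x) * (y - poly[i].y) /
                (poly[j].y - poly[i].y) + poly[i].x)
            in = !in;
    return in;
}

// Coverage of pixel (i,j) estimated from an s*s grid of subpixel centers;
// only this one pixel is upsampled, so memory is not an issue.
double coverage(const std::vector<P>& poly, int i, int j, int s = 16) {
    int hits = 0;
    for (int a = 0; a < s; ++a)
        for (int b = 0; b < s; ++b)
            if (inside(poly, i + (a + 0.5) / s, j + (b + 0.5) / s))
                ++hits;
    return (double)hits / (s * s);
}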
I'm looking for a way to programmatically recreate the following effect:
Given an input image:
input http://www.shiny.co.il/shooshx/ConeCarv/q_input.png
I want to iteratively apply the "stroke" effect.
The first step looks like this:
step 1 http://www.shiny.co.il/shooshx/ConeCarv/q_step1.png
The second step like this:
step 2 http://www.shiny.co.il/shooshx/ConeCarv/q_step2.png
And so on.
I assume this will involve some kind of edge detection and then tracing the edge somehow.
Is there a known algorithm to do this in an efficient and robust way?
Basically, according to this thread, a custom algorithm would be:
Take the 3x3 neighborhood around a pixel, threshold the alpha channel, and then see if any of the 8 pixels around it has a different thresholded alpha value. If so, paint a circle of a given radius centered at the pixel. To do inside/outside strokes, modulate by the thresholded alpha channel (negate it to do the other side). You'll have to threshold a larger neighborhood if the circle radius is larger than a pixel (which it probably is). A sketch of this recipe is below.
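A literal sketch of that recipe, assuming a flat row-major alpha buffer and a brute-force disc stamp (helper names are mine):

#include <cstdint>
#include <vector>

// Threshold the alpha channel, find pixels whose 8-neighborhood contains
// a different thresholded value, and stamp a filled disc of radius r there.
void stroke(const std::vector<uint8_t>& alpha, std::vector<uint8_t>& out,
            int w, int h, int r, uint8_t thresh) {
    auto bin = [&](int x, int y) { return alpha[(size_t)y * w + x] >= thresh; };
    for (int y = 1; y < h - 1; ++y)
        for (int x = 1; x < w - 1; ++x) {
            bool c = bin(x, y), boundary = false;
            for (int dy = -1; dy <= 1 && !boundary; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    if (bin(x + dx, y + dy) != c) { boundary = true; break; }
            if (!boundary) continue;
            for (int dy = -r; dy <= r; ++dy)     // brute-force disc stamp
                for (int dx = -r; dx <= r; ++dx)
                    if (dx * dx + dy * dy <= r * r) {
                        int sx = x + dx, sy = y + dy;
                        if (sx >= 0 && sx < w && sy >= 0 && sy < h)
                            out[(size_t)sy * w + sx] = 255;
                    }
        }
}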
This is implemented using gray-scale morphological operations, which is also the same technique used to expand/contract selections. Basically, to stroke the center of a selection (or an alpha channel), you first make two separate copies of the selection. The first is expanded (dilated) by the radius of the stroke, whereas the second is contracted (eroded). The opacity of the stroke is then obtained by subtracting the second selection from the first.
To do inside or outside strokes, you would contract/expand by twice the radius and subtract the parts that intersect with the original selection.
It should be noted that the most general morphological algorithm requires O(m*n) operations, where m is the number of pixels in the image and n is the number of elements in the "structuring element". For certain special cases, however, this can be optimized to O(m) operations (e.g. if the structuring element is a rectangle or a diamond). A naive sketch of the dilate-minus-erode approach is below.
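For concreteness, here is a naive sketch of the centered stroke described above, written as the general O(m*n) formulation with a disc structuring element (the Img alias and the function names are assumptions):

#include <algorithm>
#include <cstdint>
#include <vector>

using Img = std::vector<uint8_t>;                // grayscale, row-major

// Naive O(m*n) gray-scale dilation (dilate=true) or erosion over a
// disc of radius r; the fast rectangle/diamond special cases are omitted.
static Img morph(const Img& src, int w, int h, int r, bool dilate) {
    Img dst((size_t)w * h);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            uint8_t v = dilate ? 0 : 255;
            for (int dy = -r; dy <= r; ++dy)
                for (int dx = -r; dx <= r; ++dx) {
                    if (dx * dx + dy * dy > r * r) continue;
                    int sx = std::clamp(x + dx, 0, w - 1);   // edge-extend
                    int sy = std::clamp(y + dy, 0, h - 1);
                    uint8_t s = src[(size_t)sy * w + sx];
                    v = dilate ? std::max(v, s) : std::min(v, s);
                }
            dst[(size_t)y * w + x] = v;
        }
    return dst;
}

// Centered stroke: expand one copy, contract another, subtract.
Img stroke_opacity(const Img& sel, int w, int h, int r) {
    Img grown = morph(sel, w, h, r, true);
    Img shrunk = morph(sel, w, h, r, false);
    Img out((size_t)w * h);
    for (size_t i = 0; i < out.size(); ++i)
        out[i] = (uint8_t)(grown[i] - shrunk[i]);   // dilation >= erosion
    return out;
}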