Revit API. How can I get a bounding box for several elements? - geometry

I need to find an outline for many elements (>100'000 items). The target elements come from a FilteredElementCollector. As usual, I'm looking for the fastest possible way.
So far I have tried iterating over all elements, reading each element's BoundingBox.Min and BoundingBox.Max, and keeping track of minX, minY, minZ, maxX, maxY, maxZ. It is accurate enough, but it takes too much time.
The problem described above is part of a bigger one.
I need to find all the intersections of ducts, pipes and other curve-based elements from a linked model with walls, ceilings, columns, etc. in the host model, and then place an opening at each intersection.
I tried to use a combination of an ElementIntersectsElementFilter and the IntersectSolidAndCurve method to find the part of each curve inside an element.
First, with the ElementIntersectsElementFilter, I tried to reduce the collection for further use of IntersectSolidAndCurve. IntersectSolidAndCurve takes two arguments, a solid and a curve, and has to run inside two nested loops. So for 54,000 walls (after filtering) and 18,000 pipes, that comes to 972,000,000 operations in my case.
At around 10^5 operations, the algorithm shows acceptable times.
I decided to reduce the number of elements by dividing the search area by levels. This works well for high-rise buildings, but still poorly for extended low structures. I then decided to divide the building along its length, but I did not find a method that returns the boundaries of several elements (the whole building) at once.
I seem to be going about this the wrong way. Is there a right way to do this with the Revit API?

To find the boundaries, we can take advantage of the binary search idea.
The difference from the classic binary search algorithm is that there is no array, and we need to find two numbers instead of one.
Elements in geometry space can be thought of as a 3-dimensional sorted array of XYZ points.
The Revit API provides an excellent quick filter, BoundingBoxIntersectsFilter, which takes an instance of an Outline.
So, let's define an area that includes all the elements whose boundaries we want to find; in my case, for example, 500 meters. Create the min and max points for the initial outline:
// Revit's internal length unit is feet, so convert 500 m (500,000 mm) to feet
double b = 500000 / 304.8;
XYZ min = new XYZ(-b, -b, -b);
XYZ max = new XYZ(b, b, b);
Below is an implementation for one direction; however, you can easily apply it in all three directions by calling it repeatedly and feeding the result of the previous call into the next:
double precision = 10e-6 / 304.8;
var bb = new BinaryUpperLowerBoundsSearch(doc, precision);
XYZ[] rx = bb.GetBoundaries(min, max, elems, BinaryUpperLowerBoundsSearch.Direction.X);
rx = bb.GetBoundaries(rx[0], rx[1], elems, BinaryUpperLowerBoundsSearch.Direction.Y);
rx = bb.GetBoundaries(rx[0], rx[1], elems, BinaryUpperLowerBoundsSearch.Direction.Z);
The GetBoundaries method returns two XYZ points, lower and upper, which change only in the target direction; the other two dimensions remain unchanged:
public class BinaryUpperLowerBoundsSearch
{
    private Document doc;
    private double tolerance;
    private XYZ min;
    private XYZ max;
    private XYZ direction;

    public BinaryUpperLowerBoundsSearch(Document document, double precision)
    {
        doc = document;
        this.tolerance = precision;
    }

    public enum Direction
    {
        X,
        Y,
        Z
    }

    /// <summary>
    /// Searches for an area that completely includes all elements within a given precision.
    /// The minimum and maximum points are used for the initial assessment.
    /// The outline must contain all elements.
    /// </summary>
    /// <param name="minPoint">The minimum point of the bounding box used for the first approximation.</param>
    /// <param name="maxPoint">The maximum point of the bounding box used for the first approximation.</param>
    /// <param name="elements">Set of elements</param>
    /// <param name="axe">The direction along which the boundaries will be searched</param>
    /// <returns>Returns two points: first is the lower bound, second is the upper bound</returns>
    public XYZ[] GetBoundaries(XYZ minPoint, XYZ maxPoint, ICollection<ElementId> elements, Direction axe)
    {
        // Since Outline is not derived from the Element class, there is
        // no way to apply a transformation to it, so we can use only the
        // three basis vectors as possible directions
        switch (axe)
        {
            case Direction.X:
                direction = XYZ.BasisX;
                break;
            case Direction.Y:
                direction = XYZ.BasisY;
                break;
            case Direction.Z:
                direction = XYZ.BasisZ;
                break;
            default:
                break;
        }

        // Get the lower and upper bounds as a projection onto the direction vector.
        // Projection is an extension method.
        double lowerBound = minPoint.Projection(direction);
        double upperBound = maxPoint.Projection(direction);

        // Set the boundary points in the plane perpendicular to the direction vector.
        // These points are needed to create the BoundingBoxIntersectsFilter when IsContainsElements is called.
        min = minPoint - lowerBound * direction;
        max = maxPoint - upperBound * direction;

        double[] res = UpperLower(lowerBound, upperBound, elements);
        return new XYZ[2]
        {
            res[0] * direction + min,
            res[1] * direction + max,
        };
    }

    /// <summary>
    /// Checks which of the given elements are contained in the segment [lower, upper]
    /// </summary>
    /// <returns>The ids of the elements whose bounding boxes intersect the segment</returns>
    private ICollection<ElementId> IsContainsElements(double lower, double upper, ICollection<ElementId> ids)
    {
        var outline = new Outline(min + direction * lower, max + direction * upper);
        return new FilteredElementCollector(doc, ids)
            .WhereElementIsNotElementType()
            .WherePasses(new BoundingBoxIntersectsFilter(outline))
            .ToElementIds();
    }

    private double[] UpperLower(double lower, double upper, ICollection<ElementId> ids)
    {
        // Get the midpoint of the segment: mid = lower + 0.5 * (upper - lower)
        var mid = Midpoint(lower, upper);

        // Check if the first segment contains elements
        ICollection<ElementId> idsFirst = IsContainsElements(lower, mid, ids);
        bool first = idsFirst.Any();

        // Check if the second segment contains elements
        ICollection<ElementId> idsSecond = IsContainsElements(mid, upper, ids);
        bool second = idsSecond.Any();

        // If elements are in both segments,
        // then the first segment contains the lower border
        // and the second contains the upper
        // ---------**|***--------
        if (first && second)
        {
            return new double[2]
            {
                Lower(lower, mid, idsFirst),
                Upper(mid, upper, idsSecond),
            };
        }
        // If elements are only in the first segment, it contains both borders.
        // We recursively call UpperLower until the lower border ends up
        // in the first segment and the upper border in the second
        // ---*****---|-----------
        else if (first && !second)
            return UpperLower(lower, mid, idsFirst);
        // Do the same with the second segment
        // -----------|---*****---
        else if (!first && second)
            return UpperLower(mid, upper, idsSecond);
        // Elements are outside the segment
        // ** -----------|----------- **
        else
            throw new ArgumentException("Segment does not contain elements. Try making the initial boundaries wider.", "lower, upper");
    }

    /// <summary>
    /// Searches for the lower boundary of a segment containing elements
    /// </summary>
    /// <returns>Lower boundary</returns>
    private double Lower(double lower, double upper, ICollection<ElementId> ids)
    {
        // If the boundaries are within tolerance, return the lower bound
        if (IsInTolerance(lower, upper))
            return lower;

        // Get the midpoint of the segment: mid = lower + 0.5 * (upper - lower)
        var mid = Midpoint(lower, upper);

        // Check if the first half contains elements
        ICollection<ElementId> idsFirst = IsContainsElements(lower, mid, ids);
        bool first = idsFirst.Any();

        // ---*****---|-----------
        if (first)
            return Lower(lower, mid, idsFirst);
        // -----------|-----***---
        else
            return Lower(mid, upper, ids);
    }

    /// <summary>
    /// Searches for the upper boundary of a segment containing elements
    /// </summary>
    /// <returns>Upper boundary</returns>
    private double Upper(double lower, double upper, ICollection<ElementId> ids)
    {
        // If the boundaries are within tolerance, return the upper bound
        if (IsInTolerance(lower, upper))
            return upper;

        // Get the midpoint of the segment: mid = lower + 0.5 * (upper - lower)
        var mid = Midpoint(lower, upper);

        // Check if the second half contains elements
        ICollection<ElementId> idsSecond = IsContainsElements(mid, upper, ids);
        bool second = idsSecond.Any();

        // -----------|----*****--
        if (second)
            return Upper(mid, upper, idsSecond);
        // ---*****---|-----------
        else
            return Upper(lower, mid, ids);
    }

    private double Midpoint(double lower, double upper) => lower + 0.5 * (upper - lower);

    private bool IsInTolerance(double lower, double upper) => upper - lower <= tolerance;
}
Projection is an extension method on a vector that computes the length of the projection of one vector onto another:
public static class PointExt
{
    public static double Projection(this XYZ vector, XYZ other) =>
        vector.DotProduct(other) / other.GetLength();
}
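For completeness, here is a hedged sketch of how the result might then be used; Outline and BoundingBoxIntersectsFilter are standard Revit API, but wiring the result this way is my assumption rather than part of the code above:
// rx holds the tightened lower/upper corners after the three directional
// passes shown earlier; they can seed a quick filter directly.
var tight = new Outline(rx[0], rx[1]);
ICollection<ElementId> hits = new FilteredElementCollector(doc)
    .WhereElementIsNotElementType()
    .WherePasses(new BoundingBoxIntersectsFilter(tight))
    .ToElementIds();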

In principle, what you describe is the proper approach and the only way to do it. However, there may be many possibilities to optimise your code. The Building Coder provides various utility functions that may help, for instance to determine the bounding box of an entire family, and many more in The Building Coder samples Util module; search there for "bounding box". I am sure they can be further optimised for your case as well. For instance, you may be able to extract all the X coordinates from the individual elements' bounding box Max values and use a generic Max function to determine their maximum in one single call, instead of comparing them one by one. Benchmark your code to discover optimisation possibilities and analyse their effect on the performance. Please do share your final results for others to learn from. Thank you!
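As a hedged illustration of that last suggestion (doc and elems are assumed from the question; this is a sketch, not Building Coder code):
// Collect each element's bounding box once, then let LINQ find the extremes.
var boxes = elems
    .Select(id => doc.GetElement(id).get_BoundingBox(null))
    .Where(bb => bb != null)
    .ToList();
XYZ overallMin = new XYZ(boxes.Min(bb => bb.Min.X), boxes.Min(bb => bb.Min.Y), boxes.Min(bb => bb.Min.Z));
XYZ overallMax = new XYZ(boxes.Max(bb => bb.Max.X), boxes.Max(bb => bb.Max.Y), boxes.Max(bb => bb.Max.Z));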

Related

Finding the intersection(s) between two angle ranges / segments

We have two angle ranges, (aStart, aSweep) and (bStart, bSweep), where start is the position of the beginning of the segment, in the range [0, 2π), and sweep is the size of the segment, in the range (0, 2π].
We want to find all of the angle ranges where these two angle ranges overlap, if there are any.
We need a solution that covers at least three basic kinds of situations. But the number of cases increases as we confront the reality of the Devil Line that exists at angle = 0, which messes up all of the inequalities whenever either of the angle ranges crosses it.
This solution works by normalising the angles to said Devil Line, so that one of the angles (which we call the origin angle) always starts there. It turns out that this makes the rest of the procedure extremely simple.
#include <cmath>     // M_PI
#include <algorithm> // std::min
using std::min;

const float TPI = 2*M_PI;

// aStart and bStart must be in [0, 2PI)
// aSweep and bSweep must be in (0, 2PI]
// forInterval(float start, float sweep) gets called on each intersection found.
// There can be zero, one, or two of them, you see, so it's not obvious how we
// would want to return an answer. We leave it abstract.
// Only reports overlaps, not contacts (i.e., it shouldn't report any overlaps of zero span)
template<typename F>
void overlappingSectors(float aStart, float aSweep, float bStart, float bSweep, F forInterval){
    // we find the lower angle and work relative to it
    float greaterAngle;
    float greaterSweep;
    float originAngle;
    float originSweep;
    if(aStart < bStart){
        originAngle = aStart;
        originSweep = aSweep;
        greaterSweep = bSweep;
        greaterAngle = bStart;
    }else{
        originAngle = bStart;
        originSweep = bSweep;
        greaterSweep = aSweep;
        greaterAngle = aStart;
    }
    float greaterAngleRel = greaterAngle - originAngle;
    if(greaterAngleRel < originSweep){
        forInterval(greaterAngle, min(greaterSweep, originSweep - greaterAngleRel));
    }
    float rouno = greaterAngleRel + greaterSweep;
    if(rouno > TPI){
        forInterval(originAngle, min(rouno - TPI, originSweep));
    }
}
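If you want to unit-test the logic outside C++, here is a hedged C# transliteration (assuming a modern .NET with MathF); the callback becomes an Action<float, float> and everything else mirrors the code above:
using System;

static class Sectors
{
    const float TPI = 2f * MathF.PI;

    public static void OverlappingSectors(float aStart, float aSweep, float bStart, float bSweep, Action<float, float> forInterval)
    {
        // work relative to the lower starting angle, exactly as above
        float originAngle, originSweep, greaterAngle, greaterSweep;
        if (aStart < bStart) { originAngle = aStart; originSweep = aSweep; greaterAngle = bStart; greaterSweep = bSweep; }
        else { originAngle = bStart; originSweep = bSweep; greaterAngle = aStart; greaterSweep = aSweep; }

        float greaterAngleRel = greaterAngle - originAngle;
        if (greaterAngleRel < originSweep)
            forInterval(greaterAngle, MathF.Min(greaterSweep, originSweep - greaterAngleRel));
        float rouno = greaterAngleRel + greaterSweep;
        if (rouno > TPI)
            forInterval(originAngle, MathF.Min(rouno - TPI, originSweep));
    }
}
// Example: a = [0, pi), b = [pi/2, 3pi/2) yields one overlap starting at pi/2 with sweep pi/2:
// Sectors.OverlappingSectors(0, MathF.PI, MathF.PI / 2, MathF.PI, (s, w) => Console.WriteLine($"{s}, {w}"));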

Custom filter bank is not generating the expected output

Please refer to this article.
I have implemented section 4.1 (Pre-processing).
The preprocessing step aims to enhance image features along a set of
chosen directions. First, image is grey-scaled and filtered with a
sharpening filter (we subtract from the image its local-mean filtered
version), thus eliminating the DC component.
We selected 12 not overlapping filters, to analyze 12 different
directions, rotated with respect to 15° each other.
GitHub Repository is here.
Since the formula given in the article is incorrect, I have tried two different sets of formulas.
The first set of formulas:
The second set of formulas:
The expected output should be:
Neither of them gives proper results.
Can anyone suggest a modification?
The most relevant part of the source code is here:
public List<Bitmap> Apply(Bitmap bitmap)
{
    Kernels = new List<KassWitkinKernel>();
    double degrees = FilterAngle;
    KassWitkinKernel kernel;
    for (int i = 0; i < NoOfFilters; i++)
    {
        kernel = new KassWitkinKernel();
        kernel.Width = KernelDimension;
        kernel.Height = KernelDimension;
        kernel.CenterX = (kernel.Width) / 2;
        kernel.CenterY = (kernel.Height) / 2;
        kernel.Du = 2;
        kernel.Dv = 2;
        kernel.ThetaInRadian = Tools.DegreeToRadian(degrees);
        kernel.Compute();
        //SleuthEye
        kernel.Pad(kernel.Width, kernel.Height, WidthWithPadding, HeightWithPadding);
        Kernels.Add(kernel);
        degrees += degrees;
    }
    List<Bitmap> list = new List<Bitmap>();
    Bitmap image = (Bitmap)bitmap.Clone();
    //PictureBoxForm f = new PictureBoxForm(image);
    //f.ShowDialog();
    Complex[,] cImagePadded = ImageDataConverter.ToComplex(image);
    Complex[,] fftImage = FourierTransform.ForwardFFT(cImagePadded);
    foreach (KassWitkinKernel k in Kernels)
    {
        Complex[,] cKernelPadded = k.ToComplexPadded();
        Complex[,] convolved = Convolution.ConvolveInFrequencyDomain(fftImage, cKernelPadded);
        Bitmap temp = ImageDataConverter.ToBitmap(convolved);
        list.Add(temp);
    }
    return list;
}
Perhaps the first thing that should be mentioned is that the filters should be generated with angles which increase in FilterAngle increments (in your case 15 degrees). This can be accomplished by modifying KassWitkinFilterBank.Apply as follows (see this commit):
public List<Bitmap> Apply(Bitmap bitmap)
{
    // ...
    // The generated template filter from the equations gives a line at 45 degrees.
    // To get the filter to highlight lines starting with an angle of 90 degrees
    // we should start with an additional 45 degrees offset.
    double degrees = 45;
    KassWitkinKernel kernel;
    for (int i = 0; i < NoOfFilters; i++)
    {
        // ... setup filter (unchanged)

        // Now increment the angle by FilterAngle
        // (not "+= degrees" which doubles the value at each step)
        degrees += FilterAngle;
    }
This should give you the following result:
It is not quite the result from the paper and the differences between the images are still quite subtle, but you should be able to notice that the scratch line is most intense in the 8th figure (as would be expected since the scratch angle is approximately 100-105 degrees).
To improve the result, we should feed the filters with a pre-processed image in the same way as described in the paper:
First, image is grey-scaled and filtered with a sharpening filter (we subtract from the image its local-mean filtered version), thus eliminating the DC component
When you do so, you will get a matrix of values, some of which will be negative. As a result this intermediate processing result is not suitable to be stored as a Bitmap. As a general rule when performing image processing, you should keep all intermediate results in double or Complex as appropriate, and only convert back the final result to Bitmap for visualization.
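For concreteness, here is a minimal sketch of that sharpening step on a double[,] grey image; the method name and the 3x3 window size are my assumptions, not code from the repository:
// Subtract a k-by-k local mean (box filter) from each pixel, removing the DC component.
static double[,] SubtractLocalMean(double[,] img, int k = 3)
{
    int h = img.GetLength(0), w = img.GetLength(1), r = k / 2;
    var result = new double[h, w];
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
        {
            double sum = 0; int n = 0;
            for (int dy = -r; dy <= r; dy++)
                for (int dx = -r; dx <= r; dx++)
                {
                    int yy = y + dy, xx = x + dx;
                    if (yy >= 0 && yy < h && xx >= 0 && xx < w) { sum += img[yy, xx]; n++; }
                }
            result[y, x] = img[y, x] - sum / n; // values can go negative, hence double[,]
        }
    return result;
}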
Integrating your changes to add image sharpening from your GitHub repository, while keeping intermediate results as doubles, can be achieved by changing the input bitmap and temporary image variables to use the double[,] datatype instead of Bitmap in the KassWitkinFilterBank.Apply method (see this commit):
public List<Bitmap> Apply(double[,] bitmap)
{
    // [...]
    double[,] image = (double[,])bitmap.Clone();
    // [...]
}
which should give you the following result:
Or to better highlight the difference, here is figure 1 (0 degrees) on the left, next to figure 8 (105 degrees) on the right:

Bayes' formula for updating probabilistic map

I'm trying to get a mobile robot to map an arena based on what it can see from a camera. I've created a map and managed to get the robot to identify items placed in the arena and give an estimated location. However, as I'm only using an RGB camera, the resulting numbers can vary slightly every frame due to noise, changes in lighting, etc. What I am now trying to do is create a probability map using Bayes' formula to give a better map of the arena.
Bayes' formula:
P(i|x) = p(i) p(x|i) / sum_j p(j) p(x|j)
This is what I've got so far. All points on the map are initialised to 0.5.
// Gets the likelihood of the event being correct
// Para 1 = is the object likely to be at that location
// Para 2 = is the sensor saying it's at that location
private double getProbabilityNum(bool world, bool sensor)
{
    if (world && sensor)
    {
        // number to test the function works
        return 0.6;
    }
    else if (world && !sensor)
    {
        // number to test the function works
        return 0.4;
    }
    else if (!world && sensor)
    {
        // number to test the function works
        return 0.2;
    }
    else //if (!world && !sensor)
    {
        // number to test the function works
        return 0.8;
    }
}

// A function to update the map's probability of an object being at location (x,y)
// Para 3 = does the sensor pick up an object at (x,y)
public double probabilisticMap(int x, int y, bool sensor)
{
    // gets the current likelihood from the map (prior probability)
    double mapProb = get(x, y);
    // decide if an object is at location (x,y)
    bool world = (mapProb < threshold);
    // Bayes' formula to update the probability
    double newProb =
        (getProbabilityNum(world, sensor) * mapProb) /
        ((getProbabilityNum(world, sensor) * mapProb) + (getProbabilityNum(!world, sensor) * (1 - mapProb)));
    // update the location on the map
    set(x, y, newProb);
    // return the probability as well
    return newProb;
}
It does work, but the numbers jump rapidly and then flicker when they are near the top; it also errors if the numbers drop too close to zero. Does anyone have any idea why this might be happening? I think it's something to do with the way the equation is coded, but I'm not too sure. (I found this, but I don't quite understand it, so I'm not sure of its relevance, though it seems to be talking about the same thing.)
Thanks in advance.
Use log-likelihoods when doing numerical computations involving probabilities.
Consider
P(i|x) = p(i) p(x|i) / sum_j p(j) p(x|j).
Because x is fixed, the denominator, p(x), is a constant. Thus
P(i | x) ~ p(i)p(x|i)
where ~ denotes "is proportional to."
The log-likelihood function is just the log of this. That is,
L(i | x) = log(p(i)) + log(p(x|i)).
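A minimal sketch of how that advice maps onto the update in the question, using the equivalent log-odds form common in occupancy-grid mapping; the 0.6/0.2 sensor numbers and the get/set calls come from the question, while the clamp bound is my assumption:
// Log-odds form of the Bayes update: addition replaces the ratio.
static double LogOdds(double p) => Math.Log(p / (1.0 - p));
static double FromLogOdds(double l) => 1.0 / (1.0 + Math.Exp(-l));

public double probabilisticMapLogOdds(int x, int y, bool sensor)
{
    double l = LogOdds(get(x, y));              // prior in log-odds
    l += sensor ? LogOdds(0.6) : LogOdds(0.2);  // evidence term from the sensor model
    l = Math.Max(-10.0, Math.Min(10.0, l));     // clamp so cells never saturate irrecoverably
    double newProb = FromLogOdds(l);
    set(x, y, newProb);
    return newProb;
}
Because the update is a sum of logs, nothing underflows near zero, and the clamp prevents the flickering saturation near one described above.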

How to compute the visible area based on a heightmap?

I have a heightmap. I want to efficiently compute which tiles in it are visible from an eye at any given location and height.
This paper suggests that heightmaps outperform turning the terrain into some kind of mesh, but it samples the grid using Bresenham's lines.
If I were to adopt that, I'd have to do a line-of-sight Bresenham's line for each and every tile on the map. It occurs to me that it ought to be possible to reuse most of the calculations and compute the heightmap in a single pass if you fill outwards away from the eye - a scanline fill kind of approach perhaps?
But the logic escapes me. What would the logic be?
Here is a heightmap with the visibility from a particular vantage point (green cube) painted over it ("viewshed", as in "watershed"?):
Here is the O(n) sweep that I came up with; it seems to be the same as the method given in the paper cited in the answer below (Franklin and Ray's method), only in this case I am walking from the eye outwards instead of walking the perimeter doing a Bresenham's towards the centre. To my mind, my approach would have much better caching behaviour, i.e. be faster, and use less memory, since it doesn't have to track the vector for each tile, only remember a scanline's worth:
typedef std::vector<float> visbuf_t;

inline void map::_visibility_scan(const visbuf_t& in, visbuf_t& out, const vec_t& eye, int start_x, int stop_x, int y, int prev_y) {
    const int xdir = (start_x < stop_x)? 1: -1;
    for(int x=start_x; x!=stop_x; x+=xdir) {
        const int x_diff = abs(eye.x-x), y_diff = abs(eye.z-y);
        const bool horiz = (x_diff >= y_diff);
        const int x_step = horiz? 1: x_diff/y_diff;
        const int in_x = x-x_step*xdir; // where in the in buffer would we get the inner value?
        const float outer_d = vec2_t(x,y).distance(vec2_t(eye.x,eye.z));
        const float inner_d = vec2_t(in_x,horiz? y: prev_y).distance(vec2_t(eye.x,eye.z));
        const float inner = (horiz? out: in).at(in_x)*(outer_d/inner_d); // get the inner value, scaling by distance
        const float outer = height_at(x,y)-eye.y; // height we are at right now in the map, eye-relative
        if(inner <= outer) {
            out.at(x) = outer;
            vis.at(y*width+x) = VISIBLE;
        } else {
            out.at(x) = inner;
            vis.at(y*width+x) = NOT_VISIBLE;
        }
    }
}

void map::visibility_add(const vec_t& eye) {
    const float BASE = -10000; // represents a downward vector that would always be visible
    visbuf_t scan_0, scan_out, scan_in;
    scan_0.resize(width);
    vis[eye.z*width+eye.x-1] = vis[eye.z*width+eye.x] = vis[eye.z*width+eye.x+1] = VISIBLE;
    scan_0.at(eye.x) = BASE;
    scan_0.at(eye.x-1) = BASE;
    scan_0.at(eye.x+1) = BASE;
    _visibility_scan(scan_0,scan_0,eye,eye.x+2,width,eye.z,eye.z);
    _visibility_scan(scan_0,scan_0,eye,eye.x-2,-1,eye.z,eye.z);
    scan_out = scan_0;
    for(int y=eye.z+1; y<height; y++) {
        scan_in = scan_out;
        _visibility_scan(scan_in,scan_out,eye,eye.x,-1,y,y-1);
        _visibility_scan(scan_in,scan_out,eye,eye.x,width,y,y-1);
    }
    scan_out = scan_0;
    for(int y=eye.z-1; y>=0; y--) {
        scan_in = scan_out;
        _visibility_scan(scan_in,scan_out,eye,eye.x,-1,y,y+1);
        _visibility_scan(scan_in,scan_out,eye,eye.x,width,y,y+1);
    }
}
Is it a valid approach?
- It is using centre-points rather than looking at the slope between the 'inner' pixel and its neighbour on the side that the line of sight passes.
- Could the trig used to scale the vectors and such be replaced by factor multiplication?
- It could use an array of bytes, since the heights are themselves bytes.
- It's not a radial sweep; it does a whole scanline at a time, but away from the point. It only uses a couple of scanlines' worth of additional memory, which is neat.
- If it works, you could imagine distributing it nicely using a radial sweep of blocks; you have to compute the centre-most tile first, but then you can distribute all immediately adjacent tiles from that (they just need to be given the edge-most intermediate values), and then in turn more and more parallelism.
So how do I most efficiently calculate this viewshed?
What you want is called a sweep algorithm. Basically you cast rays (Bresenham's) to each of the perimeter cells, but keep track of the horizon as you go and mark any cells you pass on the way as visible or invisible (and update the ray's horizon if visible). This gets you down from the O(n^3) of the naive approach (testing each cell of an n x n DEM individually) to O(n^2).
A more detailed description of the algorithm is in section 5.1 of this paper (which you might also find interesting for other reasons if you aspire to work with really enormous heightmaps).
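To make the horizon bookkeeping concrete, here is a minimal C# sketch of the per-ray test that the sweep amortises; RayCells, height, visible and the eye variables are illustrative names, not from the paper:
// Walk outward along one ray; a cell is visible iff its slope from the eye
// is at least the steepest slope (the horizon) seen so far on that ray.
double horizon = double.NegativeInfinity;
foreach (var (x, y) in RayCells(eyeX, eyeY, targetX, targetY)) // e.g. Bresenham steps
{
    double d = Math.Sqrt((x - eyeX) * (x - eyeX) + (y - eyeY) * (y - eyeY));
    double slope = (height[y, x] - eyeHeight) / d; // eye-relative elevation per unit distance
    visible[y, x] = slope >= horizon;              // above the running horizon => seen
    if (slope > horizon) horizon = slope;          // visible cells raise the horizon
}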

Dirty Rectangles

Where may one find references on implementing an algorithm for calculating a "dirty rectangle" for minimizing frame buffer updates? A display model that permits arbitrary edits and computes the minimal set of "bit blit" operations required to update the display.
To build the smallest rectangle that contains all the areas that need to be repainted (sketched in code after this list):
- Start with a blank area (perhaps a rectangle set to 0,0,0,0 - something you can detect as 'no update required').
- For each dirty area added:
  - Normalize the new area (i.e. ensure that left is less than right, top less than bottom).
  - If the dirty rectangle is currently empty, set it to the supplied area.
  - Otherwise, set the left and top co-ordinates of the dirty rectangle to the smallest of {dirty, new}, and the right and bottom co-ordinates to the largest of {dirty, new}.
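A minimal C# sketch of those steps, with an illustrative Rect type (left/top/right/bottom ints, not from the answer):
struct Rect { public int L, T, R, B; public bool IsEmpty => L == 0 && T == 0 && R == 0 && B == 0; }

static Rect AddDirtyArea(Rect dirty, Rect area)
{
    // normalise the new area: left <= right, top <= bottom
    if (area.R < area.L) (area.L, area.R) = (area.R, area.L);
    if (area.B < area.T) (area.T, area.B) = (area.B, area.T);
    // an empty dirty rectangle simply becomes the supplied area
    if (dirty.IsEmpty) return area;
    // otherwise grow the dirty rectangle to cover the new area
    return new Rect
    {
        L = Math.Min(dirty.L, area.L), T = Math.Min(dirty.T, area.T),
        R = Math.Max(dirty.R, area.R), B = Math.Max(dirty.B, area.B),
    };
}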
Windows, at least, maintains an update region of the changes that it's been informed of, and any repainting that needs to be done due to the window being obscured and revealed. A region is an object that is made up of many possibly discontinuous rectangles, polygons and ellipses. You tell Windows about a part of the screen that needs to be repainted by calling InvalidateRect - there is also an InvalidateRgn function for more complicated areas. If you choose to do some painting before the next WM_PAINT message arrives, and you want to exclude that from the dirty area, there are ValidateRect and ValidateRgn functions.
When you start painting with BeginPaint, you supply a PAINTSTRUCT that Windows fills with information about what needs to be painted. One of the members is the smallest rectangle that contains the invalid region. You can get the region itself using GetUpdateRgn (you must call this before BeginPaint, because BeginPaint marks the whole window as valid) if you want to minimize drawing when there are multiple small invalid areas.
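As a hedged managed-code illustration of the same mechanism (WinForms rather than raw Win32, and not part of the answer above): Control.Invalidate plays the role of InvalidateRect, and PaintEventArgs.ClipRectangle is the bounding rectangle of the invalid region that BeginPaint reports.
using System.Drawing;
using System.Windows.Forms;

class DirtyPanel : Panel
{
    protected override void OnPaint(PaintEventArgs e)
    {
        // Only redraw what intersects the invalid region's bounding rectangle.
        e.Graphics.FillRectangle(Brushes.CornflowerBlue, e.ClipRectangle);
    }

    public void MarkDirty(Rectangle r)
    {
        Invalidate(r); // equivalent of InvalidateRect: adds r to the update region
    }
}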
I would assume that, as minimizing drawing was important on the Mac and on X when those environments were originally written, there are equivalent mechanisms for maintaining an update region.
Vexi is a reference implementation of this. The class is org.vexi.util.DirtyList (Apache License), which is used as part of production systems, i.e. thoroughly tested, and is well commented.
A caveat: the current class description is a bit inaccurate, "A general-purpose data structure for holding a list of rectangular regions that need to be repainted, with intelligent coalescing." Actually it does not currently do the coalescing. Therefore you can consider this a basic DirtyList implementation, in that it only intersects dirty() requests to make sure there are no overlapping dirty regions.
The one nuance to this implementation is that, instead of using Rect or another similar region object, the regions are stored in an array of ints, i.e. in blocks of 4 ints in a 1-dimensional array. This is done for run-time efficiency, although in retrospect I'm not sure whether there's much merit to it. (Yes, I implemented it.) It should be simple enough to substitute Rect for the array blocks in use.
The purpose of the class is to be fast. With Vexi, dirty() may be called thousands of times per frame, so intersecting the dirty regions with the dirty request has to be as quick as possible. No more than 4 number comparisons are used to determine the relative position of two regions.
It is not entirely optimal due to the missing coalescing. Whilst it does ensure no overlaps between dirty/painted regions, you might end up with regions that line up and could be merged into a larger region, thereby reducing the number of paint calls.
Code snippet. Full code online here.
public class DirtyList {
    /** The dirty regions (each one is an int[4]). */
    private int[] dirties = new int[10 * 4]; // gets grown dynamically
    /** The number of dirty regions */
    private int numdirties = 0;
    ...

    /**
     * Pseudonym for running a new dirty() request against the entire dirties list
     * (x,y) represents the topleft coordinate and (w,h) the bottomright coordinate
     */
    public final void dirty(int x, int y, int w, int h) { dirty(x, y, w, h, 0); }

    /**
     * Add a new rectangle to the dirty list; the region is dropped if it
     * fell completely within an existing rectangle or set of rectangles
     * (i.e. did not expand the dirty area)
     */
    private void dirty(int x, int y, int w, int h, int ind) {
        int _n;
        if (w<x || h<y) {
            return;
        }
        for (int i=ind; i<numdirties; i++) {
            _n = 4*i;
            // invalid dirties are marked with x=-1
            if (dirties[_n]<0) {
                continue;
            }
            int _x = dirties[_n];
            int _y = dirties[_n+1];
            int _w = dirties[_n+2];
            int _h = dirties[_n+3];
            if (x >= _w || y >= _h || w <= _x || h <= _y) {
                // new region is outside of existing region
                continue;
            }
            if (x < _x) {
                // new region starts to the left of existing region
                if (y < _y) {
                    // new region overlaps at least the top-left corner of existing region
                    if (w > _w) {
                        // new region overlaps entire width of existing region
                        if (h > _h) {
                            // new region contains existing region
                            dirties[_n] = -1;
                            continue;
                        }// else {
                        // new region contains top of existing region
                        dirties[_n+1] = h;
                        continue;
                    } else {
                        // new region overlaps to the left of existing region
                        if (h > _h) {
                            // new region contains left of existing region
                            dirties[_n] = w;
                            continue;
                        }// else {
                        // new region overlaps top-left corner of existing region
                        dirty(x, y, w, _y, i+1);
                        dirty(x, _y, _x, h, i+1);
                        return;
                    }
                } else {
                    // new region starts within the vertical range of existing region
                    if (w > _w) {
                        // new region horizontally overlaps existing region
                        if (h > _h) {
                            // new region contains bottom of existing region
                            dirties[_n+3] = y;
                            continue;
                        }// else {
                        // new region overlaps to the left and right of existing region
                        dirty(x, y, _x, h, i+1);
                        dirty(_w, y, w, h, i+1);
                        return;
                    } else {
                        // new region ends within horizontal range of existing region
                        if (h > _h) {
                            // new region overlaps bottom-left corner of existing region
                            dirty(x, y, _x, h, i+1);
                            dirty(_x, _h, w, h, i+1);
                            return;
                        }// else {
                        // existing region contains right part of new region
                        w = _x;
                        continue;
                    }
                }
            } else {
                // new region starts within the horizontal range of existing region
                if (y < _y) {
                    // new region starts above existing region
                    if (w > _w) {
                        // new region overlaps at least top-right of existing region
                        if (h > _h) {
                            // new region contains the right of existing region
                            dirties[_n+2] = x;
                            continue;
                        }// else {
                        // new region overlaps top-right of existing region
                        dirty(x, y, w, _y, i+1);
                        dirty(_w, _y, w, h, i+1);
                        return;
                    } else {
                        // new region is horizontally contained within existing region
                        if (h > _h) {
                            // new region overlaps above and below existing region
                            dirty(x, y, w, _y, i+1);
                            dirty(x, _h, w, h, i+1);
                            return;
                        }// else {
                        // existing region contains bottom part of new region
                        h = _y;
                        continue;
                    }
                } else {
                    // new region starts within existing region
                    if (w > _w) {
                        // new region overlaps at least to the right of existing region
                        if (h > _h) {
                            // new region overlaps bottom-right corner of existing region
                            dirty(x, _h, w, h, i+1);
                            dirty(_w, y, w, _h, i+1);
                            return;
                        }// else {
                        // existing region contains left part of new region
                        x = _w;
                        continue;
                    } else {
                        // new region is horizontally contained within existing region
                        if (h > _h) {
                            // existing region contains top part of new region
                            y = _h;
                            continue;
                        }// else {
                        // new region is contained within existing region
                        return;
                    }
                }
            }
        }

        // region is valid; store it for rendering
        _n = numdirties*4;
        size(_n);
        dirties[_n] = x;
        dirties[_n+1] = y;
        dirties[_n+2] = w;
        dirties[_n+3] = h;
        numdirties++;
    }
    ...
}
It sounds like what you need is a bounding box for each shape that you're rendering to the screen. Remember that a bounding box of a polygon can be defined as a "lower left" (the minimum point) and an "upper right" (the maximum point). That is, the x-component of the minimum point is defined as the minimum of all the x-components of each point in a polygon. Use the same methodology for the y-component (in the case of 2D) and the maximal point of the bounding box.
If it's sufficient to have a bounding box (aka "dirty rectangle") per polygon, you're done. If you need an overall composite bounding box, the same algorithm applies, except you can just populate a single box with minimal and maximal points.
Now, if you're doing all this in Java, you can get your bounding box for an Area (which you can construct from any Shape) directly by using the getBounds2D() method.
What language are you using? In Python, Pygame can do this for you. Use the RenderUpdates Group and some Sprite objects with image and rect attributes.
For example:
#!/usr/bin/env python
import pygame

class DirtyRectSprite(pygame.sprite.Sprite):
    """Sprite with image and rect attributes."""
    def __init__(self, some_image, *groups):
        pygame.sprite.Sprite.__init__(self, *groups)
        self.image = pygame.image.load(some_image).convert()
        self.rect = self.image.get_rect()
    def update(self):
        pass  # do something here

def main():
    pygame.init()
    screen = pygame.display.set_mode((640, 480))
    background = pygame.image.load("some_bg_image.png").convert()
    render_group = pygame.sprite.RenderUpdates()
    dirty_rect_sprite = DirtyRectSprite("some_image.png")
    render_group.add(dirty_rect_sprite)
    while True:
        dirty_rect_sprite.update()
        render_group.clear(screen, background)
        pygame.display.update(render_group.draw(screen))

if __name__ == "__main__":
    main()
If you're not using Python+Pygame, here's what I would do:
- Make a Sprite class whose update(), move(), etc. methods set a "dirty" flag.
- Keep a rect for each sprite.
- If your API supports updating a list of rects, use that on the list of rects whose sprites are dirty. In SDL, this is SDL_UpdateRects.
- If your API doesn't support updating a list of rects (I've never had the chance to use anything besides SDL, so I wouldn't know), test whether it's quicker to call the blit function multiple times or once with a big rect. I doubt that any API would be faster using one big rect, but again, I haven't used anything besides SDL.
I just recently wrote a Delphi class to calculate the difference rectangles of two images and was quite surprised by how fast it ran - fast enough to run on a short timer, and after mouse/keyboard messages, for recording screen activity.
The step-by-step gist of how it works (a sketch follows the list):
- Sub-divide the image into logical 12x12 sub-rectangles.
- Loop through each pixel; if there's a difference, tell the sub-rectangle the pixel belongs to that one of its pixels has a difference, and where.
- Each sub-rectangle remembers the co-ordinates of its own left-most, top-most, right-most and bottom-most difference.
- Once all the differences have been found, loop through all the sub-rectangles that have differences, form bigger rectangles out of them if they are next to each other, and use the left-most, top-most, right-most and bottom-most differences of those sub-rectangles to make the actual difference rectangles used.
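A minimal C# sketch of that tiling pass, under stated assumptions (prev and curr are same-size pixel grids of width w and height h; the per-tile extreme tracking and the merge step are only hinted at):
// Mark 12x12 tiles that contain any changed pixel; a fuller version would also
// record each tile's left/top/right/bottom-most change and merge neighbouring tiles.
const int Tile = 12;
int th = (h + Tile - 1) / Tile, tw = (w + Tile - 1) / Tile;
var dirtyTiles = new bool[th, tw];
for (int y = 0; y < h; y++)
    for (int x = 0; x < w; x++)
        if (prev[y, x] != curr[y, x])
            dirtyTiles[y / Tile, x / Tile] = true;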
This seems to work quite well for me. If you haven't already implemented your own solution, let me know and I'll email you my code if you like. Also as of now, I'm a new user of StackOverflow so if you appreciate my answer then please vote it up. :)
Look into R-tree and quadtree data structures.