How to get the longest and the shortest edge of a room boundary? - revit-api

In the Revit API, I'm trying to get the longest and the shortest edge of a room boundary (the room is a rectangle).
For now, I have a list of the 4 bounding edges of the room (rb_curves). These are curves. I'm trying to sort this list by the length of each curve.
sorted_rb_curves = sorted(rb_curves, key=?)
I'm wondering what I can assign to the 'key' in order to sort.
Your help would be much appreciated!

An easy way to sort lists based on object attributes is using lambda. In your case it would be:
rb_curves.sort(key=lambda x: x.Length)
where Length is the attribute you are sorting by. Note that this modifies your original list in place (as opposed to creating a new sorted list).
This means rb_curves[0] is the shortest boundary curve and rb_curves[-1] is the longest.
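If you'd rather leave rb_curves untouched, sorted() accepts the same key and returns a new list. A minimal sketch, assuming rb_curves holds Revit API Curve objects (whose Length property is in Revit's internal units, i.e. feet):

# sorted() returns a new list; rb_curves itself is left unmodified
sorted_rb_curves = sorted(rb_curves, key=lambda c: c.Length)
shortest_edge = sorted_rb_curves[0]    # shortest boundary curve
longest_edge = sorted_rb_curves[-1]    # longest boundary curve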

Related

How do I merge two very close non-intersecting geometries?

Purple and Red are the input polygons/geometries.
Green is the result I want. (I don't have a precise definition of how the resulting geometry should be defined at the discontinuity.)
I don't want to use a buffer operation (or polygon offsetting) as I think it might give me unexpected results when I expand the geometries and shrink them back.
What I tried:
I'm thinking of finding the pair of edges (one from each geometry) with the shortest distance between them and checking whether that distance is less than a threshold.
Then take the shorter edge of the pair and draw lines from its vertices to the other geometry such that the resulting line segments are as short as possible.
I feel like this approach is inelegant (not a proper interpolation) and not optimal.
I need a proper, efficient way of interpolating the gap between the polygons.
I want to avoid using a buffer method (or polygon offsetting) unless I'm certain it is a proper interpolation.
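For the "shortest distance below a threshold" part specifically, a library such as Shapely (an assumption on my part; the question doesn't name one) already exposes the needed pieces via distance and nearest_points. A rough sketch with hypothetical stand-ins for the purple and red geometries; the actual bridging/interpolation step is still left open:

from shapely.geometry import Polygon
from shapely.ops import nearest_points

# Hypothetical input polygons separated by a small gap
purple = Polygon([(0, 0), (4, 0), (4, 3), (0, 3)])
red = Polygon([(4.2, 1), (8, 1), (8, 4), (4.2, 4)])

threshold = 0.5
gap = purple.distance(red)            # minimum distance between the two geometries
if gap < threshold:
    p_on_purple, p_on_red = nearest_points(purple, red)
    print(gap, list(p_on_purple.coords), list(p_on_red.coords))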

Is there a way to have the same pattern across all the faces of an icosahedron?

This is the scenario: I have an icosahedron, therefore I have 12 vertices and 20 faces.
From the point of view of each vertex, it is the center of an "extruded" pentagon whose triangles are faces of the icosahedron.
Let's say we want to name each of the vertices of each of these triangles from 1 to 3, always in a counterclockwise fashion, imagining that each vertex is not shared among different triangles.
(I can't upload the image here for some reason, sorry.)
https://ibb.co/FmYfRG4
Is there a way to arrange the naming of the vertices inside each triangle so that every pentagon yields the same pattern of numbers along the five triangles?
As you can see, by arranging the vertex names that way the first pentagon would read 1,1,1,1,1, but the pentagons around it couldn't have the same pattern.
EDIT: Following Andrew Morton's comment, I tried to write a possible sequence.
I came up with two sequences of triangles: 1,2,3,1,3 for most pentagons, and 2,2,2,2,2 for the two caps.
I wonder if there is some additional optimization so that I only have one sequence instead of two, or whether there is some mathematical proof that this is impossible.

Quickest way to find the closest set of points

I have three arrays of points:
A=[[5,2],[1,0],[5,1]]
B=[[3,3],[5,3],[1,1]]
C=[[4,2],[9,0],[0,0]]
I need the most efficient way to find the three points (one for each array) that are closest to each other (within one pixel in each axis).
What I'm doing right now is taking one point as a reference, say A[0], and cycling through all the B and C points looking for a solution. If A[0] gives me no result, I move the reference to A[1], and so on. This approach has a huge problem: if I increase the number of points per array and/or the number of arrays, it sometimes takes far too long to converge, especially if the solution lies in the last members of the arrays. So I'm wondering if there is any way to do this without a reference point, or any quicker way than just looping over all the elements.
The rules that I must follow are the following:
the final solution has to be made up of exactly one element from each array, e.g. S=[A[n],B[m],C[j]]
each selected element has to be within 1 pixel in X and Y of ALL the other members of the solution (so |Xi-Xj|<=1 and |Yi-Yj|<=1 for every pair of members of the solution).
For example in this simplified case the solution would be: S=[A[1],B[2],C[1]]
To clarify the problem further: what I wrote above is just a simplified example to explain what I need. In my real case I don't know a priori the length of the lists nor the number of lists I have to work with; it could be A,B,C, or A,B,C,D,E, ... (each with a different number of points), etc. So I also need to find a way to make the solution as general as possible.
This requirement:
each selected element has to be within 1 pixel in X and Y of ALL the other members of the solution (so |Xi-Xj|<=1 and |Yi-Yj|<=1 for every pair of members of the solution).
massively simplifies the problem, because it means that for any given (xi, yi), there are only nine possible choices of (xj, yj).
So I think the best approach is as follows:
Copy B and C into sets of tuples.
Iterate over A. For each point (xi, yi):
Iterate over the values of x from xi−1 to xi+1 and the values of y from yi−1 to yi+1. For each resulting point (xj, yj):
Check if (xj, yj) is in B. If so:
Iterate over the values of x from max(xi, xj)−1 to min(xi, xj)+1 and the values of y from max(yi, yj)−1 to min(yi, yj)+1. For each resulting point (xk, yk):
Check if (xk, yk) is in C. If so, we're done!
If we get to the end without having a match, that means there isn't one.
This requires roughly O(len(A) + len(B) + len(C)) time and O(len(B) + len(C)) extra space.
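A minimal sketch of this approach (my own illustration; the names are arbitrary and points are assumed to be integer [x, y] pairs):

def find_close_triple(A, B, C):
    B_set = {tuple(p) for p in B}
    C_set = {tuple(p) for p in C}
    for xi, yi in A:
        # Any usable point of B lies in one of the nine cells around (xi, yi)
        for xj in range(xi - 1, xi + 2):
            for yj in range(yi - 1, yi + 2):
                if (xj, yj) not in B_set:
                    continue
                # A point of C must be within 1 of both (xi, yi) and (xj, yj)
                for xk in range(max(xi, xj) - 1, min(xi, xj) + 2):
                    for yk in range(max(yi, yj) - 1, min(yi, yj) + 2):
                        if (xk, yk) in C_set:
                            return [xi, yi], [xj, yj], [xk, yk]
    return None

A = [[5, 2], [1, 0], [5, 1]]
B = [[3, 3], [5, 3], [1, 1]]
C = [[4, 2], [9, 0], [0, 0]]
print(find_close_triple(A, B, C))   # ([5, 2], [5, 3], [4, 2]), i.e. A[1], B[2], C[1] in the question's 1-based indexing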
Edited to add (due to a follow-up question in the comments): if you have N lists instead of just 3, then instead of nesting N loops deep (which gives time exponential in N), you'll want to do something more like this:
Copy B, C, etc., into sets of tuples, as above.
Iterate over A. For each point (xi, yi):
Create a set containing (xi, yi) and its eight neighbors.
For each of the lists B, C, etc.:
For each element in the set of nine points, see if it's in the current list.
Update the set to remove any points that aren't in the current list and don't have any neighbors in the current list.
If the set still has at least one element, then — great, each list contained a point that's within one pixel of that element (with all of those points also being within one pixel of each other). So, we're done!
If we get to the end without having a match, that means there isn't one.
which is much more complicated to implement, but is linear in N instead of exponential in N.
Currently, you are finding the solution with a brute-force algorithm that has O(n^2) complexity. If your lists contain 1000 items, your algorithm will need 1000000 iterations to run... (it's actually O(n^3), as tobias_k pointed out).
As you can see here: https://en.wikipedia.org/wiki/Closest_pair_of_points_problem, you could improve it by using a divide-and-conquer algorithm, which runs in O(n log n) time.
You should search for Delaunay triangulation and/or Voronoi diagram implementations.
NB: if you can use external libs, you should also consider taking a look at the scipy lib: https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.spatial.Delaunay.html
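As a rough illustration of the Delaunay suggestion (a sketch added here, not part of the original answer): the closest pair in a point set is always joined by an edge of its Delaunay triangulation, so only the triangulation's edges need to be checked rather than all O(n^2) pairs. Note that this shows the plain closest-pair idea, not the one-point-per-list constraint from the question:

import numpy as np
from scipy.spatial import Delaunay

points = np.array([[5, 2], [1, 0], [5, 1], [3, 3], [5, 3], [1, 1], [4, 2], [9, 0], [0, 0]])
tri = Delaunay(points)

best = None
for simplex in tri.simplices:                 # each simplex holds three vertex indices
    for a, b in ((0, 1), (1, 2), (0, 2)):     # the triangle's three edges
        i, j = simplex[a], simplex[b]
        d = np.linalg.norm(points[i] - points[j])
        if best is None or d < best[0]:
            best = (d, i, j)
print(best)   # (distance, index_i, index_j) of the closest pair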

visibility of objects in multi-dimensional space based on a specific perspective

I'm working on a data mining algorithm that considers features in their n-dimensional feature space and allows surrounding training examples to block the 'visibility' of other training examples, effectively taking them out of the effective training set for this particular query.
I've been trying to find an efficient way to determine which points are 'visible' to the query. I thought the realm of computer graphics might offer some insight, but there is a lot of information to peruse, and much of it either can't be generalized to multiple dimensions or is only efficient when the number of dimensions is low.
I was hoping I could get some pointers from those of you who are more intimately knowledgeable in the domain.
The solution I found is to convert the Euclidean coordinates into "hyper-spherical" coordinates. It's similar to the spherical coordinate system, except that you add an additional angle with range [0, pi) for each additional dimension beyond three.
After that I can sort the list of points by their distance from the origin and iterate through the list, comparing each point to the first item and looking for angles that overlap. After each iteration you remove the first item and any items that were found to be blocked, then start over with the new first item (the closest remaining point).
I don't know if anyone will ever find this useful, but I thought I should put the answer up anyway.
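For reference, a small sketch of the coordinate conversion described above (my own illustration using NumPy; the poster shared no code). It returns the radius plus n-1 angles; the first n-2 angles fall in [0, pi] and the last in (-pi, pi]:

import numpy as np

def to_hyperspherical(x):
    # x is an n-dimensional point (n >= 2); returns (r, [phi_1, ..., phi_{n-1}])
    x = np.asarray(x, dtype=float)
    r = np.linalg.norm(x)
    angles = [np.arctan2(np.linalg.norm(x[k + 1:]), x[k]) for k in range(len(x) - 1)]
    angles[-1] = np.arctan2(x[-1], x[-2])   # last angle keeps the sign of the final coordinate
    return r, angles

print(to_hyperspherical([1.0, 2.0, 2.0, 4.0]))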

Finding shared vertices among polygons

I have an interesting problem coming up soon and I've started to think about the algorithm. The more I think about it, the more I get frightened because I think it's going to scale horribly (O(n^4)), unless I can get smart. I'm having trouble getting smart about this one. Here's a simplified description of the problem.
I have N polygons (where N can be huge, >10,000,000), each stored as a list of M vertices (where M is on the order of 100). What I need to do is, for each polygon, create a list of any vertices that are shared with other polygons (think of the polygons as surrounding regions of interest, where the regions sometimes butt up against each other). I envision output something like this:
Polygon i | Vertex | Polygon j | Vertex
        1 |      1 |         2 |      2
        1 |      2 |         2 |      3
        1 |      5 |         3 |      1
        1 |      6 |         3 |      2
        1 |      7 |         3 |      3
This means that vertex 1 in polygon 1 is the same point as vertex 2 in polygon 2, and vertex 2 in polygon 1 is the same point as vertex 3 in polygon 2. Likewise, vertex 5 in polygon 1 is the same as vertex 1 in polygon 3, and so on.
For simplicity, we can assume that polygons never overlap, the closest they get is touching at the edge, and that all the vertices are integers (to make the equality easy to test).
The only thing I can think of right now is, for each polygon, looping over all of the other polygons and their vertices, giving me a scaling of O(N^2*M^2), which is going to be very bad in my case. I can have very large files of polygons, so I can't even store them all in RAM, which would mean multiple reads of the file.
Here's my pseudocode so far
for i = 1 to N
    Pi = Polygon(i)
    for j = i+1 to N
        Pj = Polygon(j)
        for ii = 1 to Pi.VertexCount()
            Vi = Pi.Vertex(ii)
            for jj = 1 to Pj.VertexCount()
                Vj = Pj.Vertex(jj)
                if (Vi == Vj) AddToList(i, ii, j, jj)
            end for
        end for
    end for
end for
I'm assuming that this has come up in the graphics community (I don't spend much time there, so I don't know the literature). Any Ideas?
This is a classic iteration-vs-memory problem. If you're comparing every polygon with every other polygon, you'll end up with an O(n^2) solution. If you build a table as you step through all the polygons, then march through the table afterwards, you get a nice O(2n) solution. I ask a similar question during interviews.
Assuming you have the memory available, you want to create a multimap (one key, multiple entries) with each vertex as the key, and the polygon as the entry. Then you can walk each polygon exactly once, inserting the vertex and polygon into the map. If the vertex already exists, you add the polygon as an additional entry to that vertex key.
Once you've hit all the polygons, you walk the entire map once and do whatever you need to do with any vertex that has more than one polygon entry.
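A minimal sketch of that idea (assuming vertices are hashable, e.g. integer (x, y) tuples as the question allows; the names are illustrative):

from collections import defaultdict

def shared_vertices(polygons):
    vertex_map = defaultdict(list)      # vertex -> list of (polygon index, vertex index)
    for p_idx, polygon in enumerate(polygons):
        for v_idx, vertex in enumerate(polygon):
            vertex_map[tuple(vertex)].append((p_idx, v_idx))
    # Keep only vertices that appear in more than one polygon
    return {v: refs for v, refs in vertex_map.items() if len({p for p, _ in refs}) > 1}

polygons = [
    [(0, 0), (2, 0), (2, 2), (0, 2)],
    [(2, 0), (4, 0), (4, 2), (2, 2)],
]
print(shared_vertices(polygons))   # {(2, 0): [(0, 1), (1, 0)], (2, 2): [(0, 2), (1, 3)]}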
If you have the polygon/face data you don't even need to look at the vertices.
Create an array from [0..M] (where M is the number of verts)
Iterate over the polygons and increment the array entry for each vertex index.
This gives you an array that describes how many times each vertex is used.*
You can then do another pass over the polygons and check the entry for each vertex. If it's > 1 you know that vertex is shared by another polygon.
You can build upon this strategy further if you need to store/find other information. For example instead of a count you could store polygons directly in the array allowing you to get a list of all faces that use a given vertex index. At this point you're effectively creating a map where vertex indices are the key.
(*this example assumes you have no degenerate polygons, but those could easily be handled).
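A small sketch of the counting pass, assuming indexed geometry (each face stores indices into a shared vertex array; the names are illustrative):

def shared_vertex_indices(faces, vertex_count):
    use_count = [0] * vertex_count
    for face in faces:
        for v in face:
            use_count[v] += 1
    # Any vertex index used by more than one face is shared
    return [v for v, n in enumerate(use_count) if n > 1]

faces = [[0, 1, 2, 3], [1, 4, 5, 2]]    # two quads sharing vertices 1 and 2
print(shared_vertex_indices(faces, 6))  # [1, 2]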
Well, one simple optimization would be to make a map (hashtable, probably) that maps each distinct vertex (identified by its coordinates) to a list of all polygons of which it is a part. That cuts down your runtime to something like O(NM) - still large but I have my doubts that you could do better, since I can't imagine any way to avoid examining all the vertices.
