Revit API: Filtering Elements by Edges

I have to retrieve the endpoints of every edge in the document, but retrieving the edges takes too much time because I have to iterate through every element.
My current approach is:
FilteredElementCollector collector = new FilteredElementCollector(doc);
collector.WherePasses(new LogicalOrFilter(
    new ElementIsElementTypeFilter(false),
    new ElementIsElementTypeFilter(true)));
List<object> coordinatelist = new List<object>();
for (int i = collector.ToElements().Count - 1; i > 0; i--)
{
    Element element = collector.ToElements()[i];
    GeometryElement geo = element.get_Geometry(new Options());
    if (geo != null)
    {
        for (int j = geo.Count() - 1; j >= 0; j--)
        {
            Solid geosolid = geo.ElementAt(j) as Solid;
            if (geosolid != null)
            {
                for (int k = geosolid.Edges.Size - 1; k >= 0; k--)
                {
                    Edge edge = geosolid.Edges.get_Item(k);
                    Curve edgecurve = edge.AsCurve();
                    FillDictionary(edgecurve, element);
                }
            }
        }
    }
}
I am unable to filter by edges, since Edge is not a subclass of Element but of GeometryObject.
How can I get the edges without iterating through every element, or how can I speed up the process?

You can eliminate a lot of elements from your iteration.
Why do you want to iterate over the elements fulfilling ElementIsElementTypeFilter( true )? They do not exist in the project; they are just templates: types and symbols. Only the instances exist in the project model space.
Furthermore, you are calling ToElements inside the loop, on every single iteration. That creates a new collection of all the elements each time, which is a huge waste of time and space. There is no need to call ToElements at all! Check out this discussion of FindElement and collector optimisation.
You can probably eliminate many other elements as well. For instance, your elements of interest will almost certainly have a valid category.
The Building Coder explored several different approaches to Retrieve Model Elements or Visible 3D Elements.
You could add the check for non-void solid and the extraction of the solids into the LINQ clause, if you want, to make your code shorter and more readable; however, that will probably not affect performance much.
Something like this?
void RetrieveEdges(
    Document doc,
    Dictionary<Curve, ElementId> curves)
{
    FilteredElementCollector collector
        = new FilteredElementCollector(doc)
            .WhereElementIsNotElementType()
            .WhereElementIsViewIndependent();

    Options opt = new Options();

    foreach (Element el in collector)
    {
        if (null != el.Category)
        {
            GeometryElement geo = el.get_Geometry(opt);
            if (geo != null)
            {
                foreach (GeometryObject obj in geo)
                {
                    Solid sol = obj as Solid;
                    if (null != sol)
                    {
                        foreach (Edge edge in sol.Edges)
                        {
                            Curve edgecurve = edge.AsCurve();
                            curves.Add(edgecurve, el.Id);
                        }
                    }
                }
            }
        }
    }
}

In case you really need all geometric elements, one approach to avoid checking them one by one is to implement a custom exporter. That will give you all geometry of all visible elements in a 3D view with zero hassle. If all you need are walls, set up a suitable 3D view that displays just those walls.

Related

What is the best way to create multiple objects in AutoCAD?

I am learning about ObjectARX, and as far as I know there are three common ways to create objects in ARX:
use acdbEntMake
use record.append(entity)
use a combination of record.append and a transaction
So my questions are:
When should I use each of them?
Is there a big performance difference between them?
I am hesitant to use acdbEntMake when the number of objects is large, compared to the other two methods, because I see very few examples that mention it.
I don't know what kind of entity you are creating, but:
You don't need to use acdbEntMake in most cases. I have been using ObjectARX for about 8 years and have never used it ;)
Transactions are used in the .NET version of ObjectARX, but you tagged visual-c++, so I suppose that is not your case.
If you are worried about drawing a large number of entities, just test it: draw the way you know and measure the time needed. As long as you and your clients accept the drawing time, the way you are doing it is OK. You can always refactor the code later to get better performance if necessary.
To create, for example, a line, you may use this sample:
Acad::ErrorStatus AddLine(const AcGePoint3d SP, const AcGePoint3d EP,
                          AcDbObjectId& id, AcDbObjectId Block)
{
    AcDbLine* Line = new AcDbLine();
    Line->setStartPoint(SP);
    Line->setEndPoint(EP);
    Acad::ErrorStatus es = Add(Line, Block);
    if (es != Acad::eOk) { return es; }
    es = Line->close();
    id = Line->objectId();
    return es;
}

Acad::ErrorStatus Add(AcDbEntity* pEnt, AcDbObjectId parent)
{
    if (!pEnt) {
        return Acad::eNullEntityPointer;
    }
    Acad::ErrorStatus es;
    if (parent.isNull()) {
        parent = getActiveSpace()->objectId();
    }
    AcDbObject* pObj = NULL;
    es = acdbOpenObject(pObj, parent, AcDb::kForWrite);
    if (es != Acad::eOk) {
        return es;
    }
    if (!pObj->isKindOf(AcDbBlockTableRecord::desc())) {
        pObj->close();
        return Acad::eWrongObjectType;
    }
    AcDbBlockTableRecord* Blok = AcDbBlockTableRecord::cast(pObj);
    if ((es = Blok->appendAcDbEntity(pEnt)) != Acad::eOk)
    {
        Blok->close();
        return es;
    }
    Blok->close();
    return Acad::eOk;
}

AcDbBlockTableRecord* getActiveSpace()
{
    AcDbBlockTableRecord* pOutVal = NULL;
    AcDbDatabase* pDb = acdbHostApplicationServices()->workingDatabase();
    if (!pDb) return NULL;
    AcDbObjectId activeSpaceId = pDb->currentSpaceId();
    AcDbObject* pObj = NULL;
    Acad::ErrorStatus es = acdbOpenObject(pObj, activeSpaceId, AcDb::kForRead);
    if (es == Acad::eOk)
    {
        pOutVal = AcDbBlockTableRecord::cast(pObj);
        es = pObj->close();
    }
    return pOutVal;
}
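The "just test it and measure" advice above can be sketched in plain C++ with std::chrono. Note this is only a sketch of the measuring pattern: drawEntities is a hypothetical stand-in for whatever creation routine you want to benchmark, since the real AddLine above needs a running AutoCAD session.

#include <cassert>
#include <chrono>
#include <iostream>
#include <vector>

// Hypothetical stand-in for the entity-creation routine being measured;
// replace its body with calls to your real AddLine(...) inside AutoCAD.
static void drawEntities(std::size_t count, std::vector<int>& out)
{
    out.reserve(out.size() + count);
    for (std::size_t i = 0; i < count; ++i)
        out.push_back(static_cast<int>(i)); // placeholder for AddLine(...)
}

int main()
{
    using clock = std::chrono::steady_clock;

    std::vector<int> entities;
    const auto start = clock::now();
    drawEntities(100000, entities);
    const auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
        clock::now() - start).count();

    // If this stays within what you and your clients accept, the
    // approach is fine; otherwise profile and refactor.
    std::cout << "created " << entities.size()
              << " entities in " << ms << " ms\n";
    assert(entities.size() == 100000);
    return 0;
}

The same pattern works for comparing acdbEntMake against appendAcDbEntity: time each variant over the same entity count and compare the numbers rather than guessing.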

What's the simplest way to avoid the game freezing while calling heavy functions in Unity?

I'm currently developing a meta tic-tac-toe AI to play against in Unity. I would like to instantiate particles whenever the player plays a move. However, the particles freeze when the AI's move-evaluation function is called. What is the simplest way to keep the particles moving while the computer is calculating the best move? I've read the documentation about Unity's job system, but job structs can't contain reference types, which is a problem here.
public void Play(List<Move> moves)
{
    int[,] grid = gB.stateGrid;
    Move bestMove = new Move();
    if (firstTurn)
    {
        bestMove.col = 4;
        bestMove.row = 4;
        firstTurn = false;
    }
    else
    {
        foreach (Move m in moves)
        {
            int e = EvalMove(m, level, true, grid, gB.subgridsStates,
                Mathf.NegativeInfinity, Mathf.Infinity);
            m.value = e;
        }
        int best = moves.Select(x => x.value).Max();
        List<Move> bestMoves = moves.Where(x => x.value == best).ToList();
        // Note: the int overload of Random.Range excludes the upper bound,
        // so pass bestMoves.Count, not bestMoves.Count - 1.
        bestMove = bestMoves[Random.Range(0, bestMoves.Count)];
    }
    gB.PlaceToken(bestMove);
}
The function calculates the bestMove among all possible moves and then asks the game board script to place the corresponding token on the grid.
You can use a coroutine to spread a heavy task across frames and avoid freezing.
Change your method's return type to IEnumerator:
IEnumerator Heavy()
{
    // Some real heavy task, sliced so a frame can render in between.
    for (int i = 1; i < 10000000; i++)
    {
        blah();
        if (i % 10000 == 0)      // yield periodically, not on every iteration
            yield return null;   // resume on the next frame
    }
}
And simply call it like this:
StartCoroutine(Heavy());
You just have to run it on a MonoBehaviour; that's it. Keep in mind that a coroutine still runs on the main thread: the work is time-sliced rather than moved off-thread, so each slice between yields must be small enough to fit within a frame.

Is it normal that solving a TSP with a GA (genetic algorithm) implementation takes this much time?

I am working on a GA for a project, trying to solve the Travelling Salesman Problem with it. I used arrays to store the data, since I think arrays are much faster than lists. But for some reason it takes too much time: e.g. with MaxPopulation = 100000 and StartPopulation = 1000, the program takes about 1 min to complete. I want to know if this is a problem, and if it is, how I can fix it.
A code part from my implementation:
public void StartAsync()
{
    Task.Run(() =>
    {
        CreatePopulation();
        currentPopSize = startPopNumber;
        while (currentPopSize < maxPopNumber)
        {
            Tour[] elits = ElitChromosoms();
            for (int i = 0; i < maxCrossingOver; i++)
            {
                if (currentPopSize >= maxPopNumber)
                    break;
                int x = rnd.Next(elits.Length - 1);
                int y = rnd.Next(elits.Length - 1);
                Tour parent1 = elits[x];
                Tour parent2 = elits[y];
                Tour child = CrossingOver(parent1, parent2);
                int mut = rnd.Next(100);
                if (mutPosibility >= mut)
                {
                    child = Mutation(child);
                }
                population[currentPopSize] = child;
                currentPopSize++;
            }
            progress = currentPopSize * 100 / population.Length;
            this.Progress = progress;
            GC.Collect();
        }
        if (GACompleted != null)
            GACompleted(this, EventArgs.Empty);
    });
}
Here, the "elits" are the chromosomes whose fitness is greater than the average fitness of the population.
Scientific papers suggest smaller populations; maybe you should follow what the other authors have written. Having a big population does not give you any advantage.
A TSP can be solved by a GA, but it is maybe not the most efficient way to attack this problem. Look at this visual representation of a TSP GA: http://www.obitko.com/tutorials/genetic-algorithms/tsp-example.php
OK, I have just found a solution. Instead of using an array the size of maxPopulation, replace the old individuals with bad fitness with the new generation. Now I am working with a smaller array of length 10,000; the length was 1,000,000 before, and it was taking too much time. Now, in every iteration, I select the best 1000 chromosomes, create new chromosomes using these as parents, and replace the old, bad ones. This works perfectly.
Code sample:
public void StartAsync()
{
    CreatePopulation(); // creates the starting chromosomes
    currentProducedPopSize = popNumber; // produced chromosome count, starts at the starting population size
    while (currentProducedPopSize < maxPopNumber && !stopped)
    {
        Tour[] elits = ElitChromosoms(); // gets the best 1000 chromosomes
        Array.Reverse(population); // orders by descending
        this.Best = elits[0];
        // Create as many new chromosomes as there are bad chromosomes
        for (int i = 0; i < population.Length - elits.Length; i++)
        {
            if (currentProducedPopSize >= maxPopNumber || stopped)
                break;
            int x = rnd.Next(elits.Length - 1);
            int y = rnd.Next(elits.Length - 1);
            Tour parent1 = elits[x];
            Tour parent2 = elits[y];
            Tour child = CrossingOver(parent1, parent2);
            int mut = rnd.Next(100);
            if (mutPosibility <= mut)
            {
                child = Mutation(child);
            }
            population[i] = child; // replace an old chromosome
            currentProducedPopSize++; // increase the produced chromosome count
        }
        progress = currentProducedPopSize * 100 / maxPopNumber;
        this.Progress = progress;
        GC.Collect();
    }
    stopped = false;
    this.Best = population[population.Length - 1];
    if (GACompleted != null)
        GACompleted(this, EventArgs.Empty);
}

Tour[] ElitChromosoms()
{
    Array.Sort(population);
    Tour[] elits = new Tour[popNumber / 10];
    Array.Copy(population, elits, elits.Length);
    return elits;
}

j2me - Does a List have any property to keep track of a 'key' that identifies the item?

How do you usually deal with Lists and the fact that they don't have a property to clearly identify a specific item?
So far, the only solution I could come up with is to put the key I use at the beginning, followed by a hyphen and the text that is shown for every item.
This way, when I retrieve the text from the selected item, I can get the item's key.
This is how I do it, but surely there's got to be a better solution, and I'd really appreciate you sharing your experience with this kind of scenario.
Thanks in advance.
The picture looks like you keep all the data managed by your application inside the text of the items of a standard List.
Better to have a separate class for the data container objects, and an overview screen derived from List that takes an array of those container objects and instantiates the items from it. This screen could then provide a method
DataContainer getSelectedObject()
which uses getSelectedIndex() internally to look up the object.
More specifically (Overview.java):
package mvc.midlet;

import javax.microedition.lcdui.List;

public class Overview extends List {

    private final DomainObject[] data;

    public static Overview create(DomainObject[] data) {
        int i = 0;
        for (; i < data.length; i++) {
            if (data[i] == null) break;
        }
        String[] names = new String[i];
        for (int j = 0; j < i; j++) {
            names[j] = data[j].name;
        }
        return new Overview(names, data);
    }

    protected Overview(String[] names, DomainObject[] data) {
        super("Overview", IMPLICIT, names, null);
        this.data = data;
    }

    public DomainObject getSelectedObject() {
        return data[this.getSelectedIndex()];
    }
}

Help needed for making a calendar like MS Outlook

I am working on an app like the MS Outlook calendar, where the user can add events etc.
I am having problems laying out the event objects according to their size: in the MS Outlook calendar, the user can drag and resize an event object, and the sizes of the event objects adjust automatically.
I need an algorithm for this. I have written my own, but there are several problems; help needed.
This screenshot shows the event object arrangement, which is dynamic.
Here is the answer:
You can go for a rectangle-packing algorithm, but keep in mind that the events should be sorted with respect to time and date, and only horizontal packing will work for you.
Here is the rectangle packing algo
Since you're using Flex, this isn't a direct answer to your question, but it will hopefully set you down the right path.
Try taking a look at how FullCalendar's week and day views implement this. FullCalendar is a jQuery plugin that renders a calendar which does exactly what you're looking for.
You'll have to extract the rendering logic from FullCalendar and translate it to your project in Flex. I know JavaScript and ActionScript are very similar, but I've never used Flex, so sorry I can't be more help in that area.
FullCalendar's repo is here. Specifically, it looks like AgendaView.js is the most interesting file for you to look at.
I think you are asking about a general object-layout algorithm, right?
I am quite sure that this is an NP-complete problem: arrange a set of intervals, each defined by a start and an end, into as few columns as possible.
Being NP-complete means that your best shot is probably trying out all possible arrangements:
Find the clusters in your objects: the groups where there is something to do, i.e. where intervals overlap.
For each cluster:
let n be the number of objects in the cluster;
if n is too high (like 10 or 15), stop and just draw the objects overlapping;
generate all possible orderings of the objects in the cluster (for n objects these are n! combinations, e.g. 6 objects give 720 possible orderings);
for each ordering, lay out the objects in a trivial manner: loop through the elements and place each one in an existing column if it fits there, or start a new column if you need one;
keep the layout with the fewest columns.
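The enumerate-and-greedily-place steps above can be sketched as follows. This is a minimal C++ sketch under the stated assumptions: Interval, columnsFor and minColumns are hypothetical names, real events would carry more data than a start and an end, and the factorial loop is only feasible for the small clusters the answer allows.

#include <algorithm>
#include <iostream>
#include <numeric>
#include <vector>

// One event's vertical extent within a cluster.
struct Interval { double start, end; };

static bool overlaps(const Interval& a, const Interval& b)
{
    return a.start < b.end && b.start < a.end;
}

// Greedy layout for one ordering: put each interval into the first
// column where it fits, opening a new column otherwise.
static std::size_t columnsFor(const std::vector<Interval>& items,
                              const std::vector<std::size_t>& order)
{
    std::vector<std::vector<Interval>> cols;
    for (std::size_t idx : order) {
        bool placed = false;
        for (auto& col : cols) {
            bool fits = std::none_of(col.begin(), col.end(),
                [&](const Interval& p) { return overlaps(p, items[idx]); });
            if (fits) {
                col.push_back(items[idx]);
                placed = true;
                break;
            }
        }
        if (!placed)
            cols.push_back({ items[idx] });
    }
    return cols.size();
}

// Try every ordering of the cluster (n! of them) and keep the
// layout with the fewest columns.
std::size_t minColumns(std::vector<Interval> items)
{
    std::vector<std::size_t> order(items.size());
    std::iota(order.begin(), order.end(), 0);
    std::size_t best = items.size();
    do {
        best = std::min(best, columnsFor(items, order));
    } while (std::next_permutation(order.begin(), order.end()));
    return best;
}

int main()
{
    // Two overlapping events plus one disjoint event: two columns suffice.
    std::vector<Interval> cluster = { {0, 2}, {1, 3}, {4, 5} };
    std::cout << minColumns(cluster) << "\n"; // prints 2
}

The n! guard matters: beyond roughly 10 objects per cluster, the permutation loop explodes, which is exactly why the answer suggests falling back to drawing the objects overlapping.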
Here is how I did it:
Events are packed into columns, grouped by day (or some other rule).
Events in one column are further separated into sub-columns as long as there is a continuous intersection on the Y axis.
Events are assigned their X position (0 to 1) and their X size (0 to 1).
Events are recursively expanded until the last one of each intersecting group (by Y and X axis) hits the column boundary or another event that has finished expanding.
Essentially it is brute force, but it works fairly quickly, since not many events need further expanding beyond step 3.
var physics = [];
var step = 0.01;

var PackEvents = function (columns) {
    var n = columns.length;
    for (var i = 0; i < n; i++) {
        var col = columns[i];
        for (var j = 0; j < col.length; j++) {
            var bubble = col[j];
            bubble.w = 1 / n;
            bubble.x = i * bubble.w;
        }
    }
};

var collidesWith = function (a, b) {
    return b.y < a.y + a.h && b.y + b.h > a.y;
};

var intersects = function (a, b) {
    return b.x < a.x + a.w && b.x + b.w > a.x &&
           b.y < a.y + a.h && b.y + b.h > a.y;
};

var getIntersections = function (box) {
    var i = [];
    Ext.each(physics, function (b) {
        if (intersects(box, b) && b.x > box.x)
            i.push(b);
    });
    return i;
};

var expand = function (box, off, exp) {
    var newBox = {
        x: box.x,
        y: box.y,
        w: box.w,
        h: box.h,
        collision: box.collision,
        rec: box.rec
    };
    newBox.x += off;
    newBox.w += exp;
    var i = getIntersections(newBox);
    var collision = newBox.x + newBox.w > 1;
    Ext.each(i, function (n) {
        collision = collision || expand(n, off + step, step) || n.collision;
    });
    if (!collision) {
        box.x = newBox.x;
        box.w = newBox.w;
        box.rec.x = box.x;
        box.rec.w = box.w;
    } else {
        box.collision = true;
    }
    return collision;
};

Ext.each(columns, function (column) {
    var lastEventEnding = null;
    var columns = [];
    physics = [];
    Ext.each(column, function (a) {
        if (lastEventEnding !== null && a.y >= lastEventEnding) {
            PackEvents(columns);
            columns = [];
            lastEventEnding = null;
        }
        var placed = false;
        for (var i = 0; i < columns.length; i++) {
            var col = columns[i];
            if (!collidesWith(col[col.length - 1], a)) {
                col.push(a);
                placed = true;
                break;
            }
        }
        if (!placed) {
            columns.push([a]);
        }
        if (lastEventEnding === null || a.y + a.h > lastEventEnding) {
            lastEventEnding = a.y + a.h;
        }
    });
    if (columns.length > 0) {
        PackEvents(columns);
    }
    Ext.each(column, function (a) {
        a.box = {
            x: a.x,
            y: a.y,
            w: a.w,
            h: a.h,
            collision: false,
            rec: a
        };
        physics.push(a.box);
    });
    while (true) {
        var box = null;
        for (i = 0; i < physics.length; i++) {
            if (!physics[i].collision) {
                box = physics[i];
                break;
            }
        }
        if (box === null)
            break;
        expand(box, 0, step);
    }
});
Result: http://imageshack.com/a/img913/9525/NbIqWK.jpg
