I'm having problems removing the blue colored volumes from my COMSOL geometry. I want to remove them in a way that the resulting pipe system doesn't have any holes in it. Another way to put it: I want to cut off the "excess pipe".
I have tried all boolean operations COMSOL provides, but nothing seems to be useful for my problem. Which COMSOL tool can I use to achieve the geometry described above?
It is certainly possible to achieve this using a combination of boolean operations. But generally, the easiest way is to unite the objects, split them along interior boundaries, and delete the excess volumes:
Add a "Union" operation.
Add all objects or the relevant subset to the "input objects" selection.
Leave the option "Keep interior boundaries" checked (the default).
Add a "Split" operation and select the union as input object.
Add "Delete Entities" and select "Object" from "Geometric entity level".
Select all excess volumes.
Obviously, the original objects all have to properly overlap for this to work.
Our plugin maintains some instance parameter values across many elements, including those in groups.
Occasionally the end users will introduce data that activates an unused Category,
so we have to update the document parameter bindings, to include those categories. However, when we call
doc.ParameterBindings.ReInsert()
our existing parameter values inside groups are lost, because the VariesAcrossGroups flag is toggled back to false.
How is this intended to work in Revit? Are we supposed to use the API in a different way to avoid triggering this problem?
ReInsert() expects a base Definition argument, and would usually get an ExternalDefinition supplied.
As an experiment, I instead scanned through the definition keys of the existing bindings and matched those.
This way, I got the document's InternalDefinition, and tried calling ReInsert with that instead
(my hope was that, since the existing InternalDefinition DID have VariesAcrossGroups=true, this would help). Alas, ReInsert doesn't seem to care.
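For reference, the scan I mean looks roughly like this - a minimal C# sketch, where the parameter name is whatever your plugin binds and error handling is omitted:

    using Autodesk.Revit.DB;

    // Sketch: walk the document's binding map to find the InternalDefinition
    // that an earlier binding of the shared parameter produced.
    static Definition FindBoundDefinition(Document doc, string name)
    {
        DefinitionBindingMapIterator it = doc.ParameterBindings.ForwardIterator();
        it.Reset();
        while (it.MoveNext())
        {
            Definition def = it.Key; // an InternalDefinition once the parameter is bound
            if (def.Name == name)
                return def;
        }
        return null;
    }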
The problem, as you might guess, is that after VariesAcrossGroups=False, a lot of my instance parameters have collapsed into each other, so they all hold identical values. Given that they are IDs, this is less than ideal.
My current (intended) solution is to take a backup of all existing parameter values BEFORE I update the bindings; then, after the binding update and after setting VariesAcrossGroups back to true, I inspect all values and re-assign the parameter values that were broken. But as you may surmise, this is less than ideal - it will be horribly slow for the users of our plugin, and frankly it seems like something the Revit API should take care of, not the plugin developer.
Are we using this the wrong way?
One approach I have considered is to bind every possible category I can think of, up front and once only. But I'm not sure that is possible. Categories are also difficult to work with in themselves, as you can only create them indirectly, using your project document as a factory (i.e. you cannot create a category yourself; you can only ask the Document to - maybe! - create the category you request). Because of this, I don't think you can bind all categories up front - some categories only become available in the document AFTER you have loaded a given family/type into your project.
To sum it up: First, I
doc.ParameterBindings.ReInsert()
my binding, with the updated categories. Then, I call
InternalDefinition.SetAllowVaryBetweenGroups()
(after having determined that IDEF.VariesAcrossGroups has reverted to false).
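In code, that sequence looks roughly like this - a sketch, where externalDef and binding are whatever your plugin already uses, BuiltInParameterGroup.PG_DATA stands in for your parameter group, and FindBoundDefinition is the lookup sketched above:

    using (Transaction tx = new Transaction(doc, "Update parameter bindings"))
    {
        tx.Start();

        // Re-insert the binding with the updated category set.
        doc.ParameterBindings.ReInsert(externalDef, binding, BuiltInParameterGroup.PG_DATA);

        // ReInsert appears to reset VariesAcrossGroups, so flip it back on.
        InternalDefinition internalDef =
            FindBoundDefinition(doc, externalDef.Name) as InternalDefinition;
        if (internalDef != null && !internalDef.VariesAcrossGroups)
            internalDef.SetAllowVaryBetweenGroups(doc, true);

        tx.Commit();
    }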
I am interested to hear the best way to do this, without destroying the client's existing data.
Thank you very much in advance.
(I'm not sure I will accept my own answer).
My answer is just that you can work around this problem
by scanning the entire Revit database for your existing parameter values before you update the document bindings.
Afterwards, you reset VariesAcrossGroups back to its lost value.
Then, you iterate through your collected parameters, and verify which ones have lost their original value, and reset them back to their intended value.
One trick that speeds this up a bit is that you can check Element.GroupId <> -1, i.e. restrict yourself to elements that are group members.
You only need to track elements which are group members, as it's precisely those that are affected by this Revit bug.
A further tip: don't only watch out for parameter values that have lost their original value. You must also watch out for parameter values that have accidentally GAINED a value but should be left unset.
I just use FilteredElementCollector with WhereElementIsNotElementType().
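Put together, the backup/restore dance looks roughly like this - a condensed sketch assuming a string-valued parameter, where the parameter name is a placeholder and an open transaction is assumed when values are set:

    using System.Collections.Generic;
    using Autodesk.Revit.DB;

    static class ParameterBackup
    {
        // Back up the parameter on all group members before touching the bindings.
        // Only group members need tracking; only they are hit by this bug.
        public static Dictionary<ElementId, string> Backup(Document doc, string paramName)
        {
            var backup = new Dictionary<ElementId, string>();
            foreach (Element e in new FilteredElementCollector(doc).WhereElementIsNotElementType())
            {
                if (e.GroupId == ElementId.InvalidElementId)
                    continue; // not a group member
                Parameter p = e.LookupParameter(paramName);
                if (p != null)
                    backup[e.Id] = p.AsString(); // may be null, i.e. deliberately unset
            }
            return backup;
        }

        // After ReInsert + SetAllowVaryBetweenGroups(true), put back any values
        // that were lost or accidentally gained.
        public static void Restore(Document doc, string paramName, Dictionary<ElementId, string> backup)
        {
            foreach (KeyValuePair<ElementId, string> entry in backup)
            {
                Element e = doc.GetElement(entry.Key);
                Parameter p = (e == null) ? null : e.LookupParameter(paramName);
                if (p == null)
                    continue;
                if (p.AsString() != entry.Value)
                    p.Set(entry.Value ?? string.Empty); // the API has no clean "unset"; empty string approximates it
            }
        }
    }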
Performance-wise, it is of course horrible to do all this,
but given how Revit behaves, I see no other solution if you have to ship to your clients.
I am new to activity diagrams and am currently trying to draw one based on a given description.
I am in doubt about a particular section, as I am unsure whether it should be 'split'.
Under the "Employee", the given description is as follows:
Employee enter in details about physical damage and cleanliness on the
machine. For the cleanliness, there must be a statement to indicate
that the problem is no longer an issue.
As such, I use a foreach to describe that there should be two checks - physical damage and cleanliness (see the diagram in the link) - before it moves on to the next activity under the System, where the system records the checks.
Thus, am I on the right track? Thank you in advance for any replies.
Your example is not valid UML. In order to make it proper, you need to enclose the fork/join in an expansion region, like so:
A fork/join does not accept any semantic labels. It just splits the control flow into several parallel flows, which join at the end.
However, this still seems odd, since you would probably have some control over the different inspections being entered. So I'd guess there's a decision that loops through multiple inspection entries. Personally, I use regions only for handling interrupts. ADs are nice up to a certain level, but sometimes tabular text (as suggested by Cockburn) is just easier to write and read. Graphical programming is not the ultimate answer (unlike 42).
First, the 'NO' branch of the decision node must lead somewhere (to the end?).
Second, it depends on whether you want to show the process for ONE or MULTIPLE inspections. The most logical choice is to draw the diagram for a single inspection, because you wrote 'inspection' without an 's'! If you want to represent more than one inspection, you can use a decision and a merge node to represent a loop that stops when there are no more inspections.
I'm having trouble finding a way to solve this specific problem using MeshLab.
As you can see in the figure, the mesh I'm working with has cracks in certain areas, and I would like to close them. The "Close Holes" option does not seem to work: since these are technically cracks rather than holes, it appears unable to weld them.
I managed to get a good result using the "Screened Poisson Surface Reconstruction" option, but since that operation rebuilds the whole mesh topology, I would lose all the information about the mesh's UVs (and I cannot afford to lose them).
I would need some advice on the best method to weld these cracks: one that does not change the vertices that are not along them, adding only the geometry needed to close the mesh (or, ideally, welding the existing edges along the crack borders).
Thanks in advance!
As answered by A.Comer in a comment to the main question, I was able to get the desired result simply by playing a bit with the parameters of the "close holes" tool.
Just for the sake of completeness, here is a copy of the comment:
The close holes option should be able to handle this. Did you try changing the max size for that filter to a much larger number? Do filters >> selection >> select border and put the number of selected faces as the max size into that filter – A.Comer
I'm trying to plug a (very) simple graph layout algorithm into my GEF editor. I do it by simply adding calculateX() and calculateY() calls to my NodeEditParts' refreshVisuals() (the graph figure has an XYLayout, obviously).
It does work, albeit only for those nodes that are the source of a connection to another node. When I try to access the constraints of the nodes connected to the node in question by connections of which it is the target, I get a NullPointerException.
I'm guessing that this is to do with the order in which nodes are drawn in GEF.
I'm also guessing that there is no element parser that checks which elements have to be drawn first; rather, elements are either drawn in the order they appear in a list, or concurrently via the EditPartFactory (which, however, must get its input from some sort of ordered collection in the model).
But how is it really done?
In GEF, the elements are drawn in the order they appear in the list returned by getModelChildren() (I don't remember whether from start to end or backwards, but you can check the code).
Nevertheless, I couldn't understand what exactly your problem is, so if you can provide more details I may be able to help you further.
I have a set of objects that can be scaled and translated.
Suppose the user selects an object and drags it to some position.
I was thinking about implementing this in two different ways: either changing the coordinates of the objects given the mouse position, or changing the transformation matrix.
Is one of these implementations better than the other?
My main issues are:
Performance
Code organization
Scalability
Objects have certain coordinates, and the way you look at objects has a certain frame of reference. I think it is better not to mess with your coordinates, and instead to change just the matrix that takes you from "the object is here" to "I draw the object here". It is much cleaner. Performance-wise, you have to apply a transformation to each object being rendered anyway, so you may as well do it just once. From a code organization perspective, it is better to keep coordinates tied to something physical. And from a scalability perspective, not rewriting the coordinates of every object each time the user changes the view is clearly preferable: you apply the transformation only to objects you actually render, so if you can't keep up you can skip a step, whereas if you failed to rescale some of your objects during a step you would quickly get into trouble. Finally, applying repeated transformations to an object's stored coordinates would accumulate numerical error.
Stream of consciousness, but a clear preference, I think!
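To make that preference concrete, here is a minimal sketch (the class and the 2D setup are hypothetical, purely to illustrate the separation): the object's coordinates stay fixed, and dragging only updates the transform that is applied at render time.

    // Sketch: geometry is immutable; dragging/zooming only changes the transform,
    // which is applied once per vertex at render time.
    class SceneObject
    {
        public readonly (double X, double Y)[] Vertices; // never mutated
        public double Tx, Ty;                            // translation
        public double Scale = 1.0;                       // uniform scale

        public SceneObject((double X, double Y)[] vertices) { Vertices = vertices; }

        // Called while the user drags: only the transform changes.
        public void DragBy(double dx, double dy) { Tx += dx; Ty += dy; }

        // Model-to-screen mapping, applied at render time.
        public (double X, double Y) ToScreen((double X, double Y) v)
            => (v.X * Scale + Tx, v.Y * Scale + Ty);
    }

Because Vertices is never rewritten, repeated drags cannot accumulate rounding error in the stored geometry; any error stays confined to the single transform, which is recomputed from scratch each frame.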