Does anyone know if ReferenceIntersector works with TopographySurfaces? I cannot make it work. I need to find a point on the surface based on an intersection with a line.
Did you solve it? If not, I gave it a try, and this code works fine for me:
public XYZ ProjectPointOnTopographySurface(XYZ point, int direction)
{
    // Retrieve the default 3D view; the ReferenceIntersector needs a 3D view to work in
    View3D view3D = new FilteredElementCollector(Document)
        .OfClass(typeof(View3D))
        .Cast<View3D>()
        .Where(v => v.Name == "{3D}")
        .FirstOrDefault();

    // Shoot the ray straight up (direction = 1) or straight down (direction = -1)
    XYZ vectorDirection = new XYZ(0, 0, direction);

    // Only consider topography surfaces as intersection targets
    ElementClassFilter intersectionFilter = new ElementClassFilter(typeof(TopographySurface));
    ReferenceIntersector referenceIntersector = new ReferenceIntersector(intersectionFilter, FindReferenceTarget.All, view3D);

    // FindNearest returns null if the ray does not hit anything
    ReferenceWithContext referenceWithContext = referenceIntersector.FindNearest(point, vectorDirection);
    return referenceWithContext == null
        ? null
        : referenceWithContext.GetReference().GlobalPoint;
}
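For example, to drop a point straight down onto the terrain (the start coordinates here are just an illustrative assumption):

// Returns null if no TopographySurface lies below the start point
XYZ start = new XYZ(10, 20, 100);
XYZ onSurface = ProjectPointOnTopographySurface(start, -1);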
Regardless of whether the ReferenceIntersector works with topography surfaces or not, you can quite easily solve the problem you describe yourself using other means. Simply ask the surface for its tessellated representation, which returns a bunch of triangles, and then implement your own algorithm to intersect a line with a triangle. That should give you all you need, really; a sketch of this approach follows below.
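Here is a minimal sketch of that idea, assuming 'topo' is the TopographySurface and the line is given as an origin plus a direction; it walks the tessellated mesh and applies the standard Moeller-Trumbore ray-triangle test (the method name and the tolerance are my own choices):

public XYZ IntersectLineWithTopography(TopographySurface topo, XYZ origin, XYZ dir)
{
    Options opt = new Options();
    foreach (GeometryObject obj in topo.get_Geometry(opt))
    {
        Mesh mesh = obj as Mesh;
        if (mesh == null) continue;

        for (int i = 0; i < mesh.NumTriangles; ++i)
        {
            MeshTriangle tri = mesh.get_Triangle(i);
            XYZ v0 = tri.get_Vertex(0);
            XYZ v1 = tri.get_Vertex(1);
            XYZ v2 = tri.get_Vertex(2);

            // Moeller-Trumbore ray-triangle intersection
            XYZ e1 = v1 - v0;
            XYZ e2 = v2 - v0;
            XYZ p = dir.CrossProduct(e2);
            double det = e1.DotProduct(p);
            if (Math.Abs(det) < 1e-9) continue; // line parallel to triangle

            XYZ s = origin - v0;
            double u = s.DotProduct(p) / det;
            if (u < 0 || u > 1) continue;

            XYZ q = s.CrossProduct(e1);
            double v = dir.DotProduct(q) / det;
            if (v < 0 || u + v > 1) continue;

            // Intersection found; convert the line parameter back to a point
            double t = e2.DotProduct(q) / det;
            return origin + t * dir;
        }
    }
    return null; // the line misses the surface
}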
I am trying to create a wall with two layers, where each layer has a different material. When I try to set the CompoundStructure for the wall, I get an exception saying that the CompoundStructure is not valid.
CompoundStructure cStructure = CompoundStructure.CreateSimpleCompoundStructure(clayer);
wallType.SetCompoundStructure(cStructure);
Can anyone tell me how I can create a compound structure with layers of different materials?
First of all, solve your task manually through the end user interface and verify that it works at all.
Then, use RevitLookup and other database exploration tools to examine the results in the BIM elements, their properties and relationships.
Once you have done that, you will have a good idea how to address the task programmatically – and have confidence that it will work as expected:
How to research to find a Revit API solution
Intimate Revit database exploration with the Python Shell
// Duplicate the base material once per layer, so that each
// layer can carry its own colour
Material newWallMaterial = wallMaterial.Duplicate("newCreatedMaterial");
Material newWallMaterial2 = wallMaterial.Duplicate("newCreatedMaterial2");

foreach (Layers layer in layers)
{
    if (layer.layerId == 0)
    {
        newWallMaterial.Color = color;
        c = new CompoundStructureLayer(layer.width, layer.materialAssignement, newWallMaterial.Id);
        clayer.Add(c);
    }
    if (layer.layerId == 1)
    {
        newWallMaterial2.Color = color;
        c1 = new CompoundStructureLayer(layer.width, layer.materialAssignement, newWallMaterial2.Id);
        clayer.Add(c1);
    }
}
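To round this off, here is a minimal sketch of how the assembled layer list might be validated and applied to the wall type; the variable names doc, wallType and clayer are carried over from the snippets above, and everything that modifies the document has to run inside a transaction. Asking the structure itself what is wrong is often the quickest way to diagnose the "not valid" exception:

using (Transaction tx = new Transaction(doc, "Set compound structure"))
{
    tx.Start();

    CompoundStructure cs = CompoundStructure.CreateSimpleCompoundStructure(clayer);

    // Report per-layer problems, e.g. zero widths or inconsistent functions
    IDictionary<int, CompoundStructureError> errMap;
    IDictionary<int, int> twoLayerErrMap;
    if (!cs.IsValid(doc, out errMap, out twoLayerErrMap))
    {
        // inspect errMap here before going any further
    }

    wallType.SetCompoundStructure(cs);
    tx.Commit();
}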
I've been struggling with this issue off and on for the better part of a year.
As the title says, I wish to dimension from one side of a wall to both sides of openings (door openings), then terminate at the other end of the wall (vertically and horizontally). I also wish to dimension to all families hosted in the wall, which I have been able to accomplish using Scott Wilson's voodoo magic helper class, found here: http://thebuildingcoder.typepad.com/blog/2016/04/stable-reference-string-magic-voodoo.html
foreach (ElementId ele in selSet)
{
    FamilyInstance fi = doc.GetElement(ele) as FamilyInstance;
    Reference reference = ScottWilsonVoodooMagic.GetSpecialFamilyReference(
        fi, ScottWilsonVoodooMagic.SpecialReferenceType.CenterLR, doc);
    refs.Append(reference);
    pts[i] = (fi.Location as LocationPoint).Point;
    i++;
}

XYZ offset = new XYZ(0, 0, 4);
Line line = Line.CreateBound(pts[0] + offset, pts[selSet.Count - 1] + offset);

using (Transaction t = new Transaction(doc))
{
    t.Start("dimension embeds");
    Dimension dim = doc.Create.NewDimension(doc.ActiveView, line, refs);
    t.Commit();
}
The problem lies in determining the appropriate stable references to the wall faces. I am able to find all faces on a wall, but this gives me 100+ faces to sort through.
If anyone can help me it would be greatly appreciated!
Side note: the closest I've gotten is casting a ray through my panel and then using the ReferenceIntersector to determine the references, but I'm really struggling with the implementation: http://thebuildingcoder.typepad.com/blog/2015/12/retrieving-wall-openings-and-sorting-points.html
These two posts should provide more than enough to solve all your issues:
Dimension walls by iterating their faces
Dimension walls by shooting a ray
Basically, you need to obtain references to the faces or edges that you wish to attach the dimensioning to. These references can be obtained in several ways. Two common and easy approaches are:
Retrieve the element geometry using the option ComputeReferences set to true and extract the specific face required.
Shoot a ray through the model to determine the required element and its face using the ReferenceIntersector class; a sketch of this approach follows below.
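As a rough illustration of the second approach, this hedged sketch shoots a ray across the wall and collects the wall face references it hits, ready to be fed to NewDimension; doc, origin and rayDirection are assumed inputs you would supply yourself:

// Any non-template 3D view will do as the context for the ray
View3D view3D = new FilteredElementCollector(doc)
    .OfClass(typeof(View3D))
    .Cast<View3D>()
    .First(v => !v.IsTemplate);

// Restrict hits to wall faces only
ReferenceIntersector intersector = new ReferenceIntersector(
    new ElementClassFilter(typeof(Wall)), FindReferenceTarget.Face, view3D);

ReferenceArray refs = new ReferenceArray();
foreach (ReferenceWithContext rwc in intersector.Find(origin, rayDirection))
{
    refs.Append(rwc.GetReference());
}

// refs can now be passed to doc.Create.NewDimension(view, line, refs)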
I have been running the SpatialUnderstandingExample scene from HoloToolkit. I couldn't figure out how to place my objects into the scene. I want to replace those small boxes that come with it by default with my own objects. How can I do that?
Thanks
Edit: I found the DrawBox call, but how do I push my object there?
Edit 2: I finally pushed an object to the position, but the code is still very complicated and is messing up the size and shape of my object. I will try to make it clean and neat.
It's been a while since I've looked at that example, so hopefully I remember its method names correctly. It contains a "DrawBox" method that is called after a successful call to get a location from spatial understanding. The call that creates the box looks something like this:
DrawBox(toPlace, Color.red);
Replace this call with the following (assuming "toPlace" contains the results from the spatial understanding call and "model" contains the model you are trying to place there):
var rotation = Quaternion.LookRotation(toPlace.Normal, Vector3.up);

// Stay centered in the square, but move down to the ground
var position = toPlace.Position - new Vector3(0, RequestedSize.y * .5f, 0);

// Instantiate the hologram from the model
GameObject newObject = Instantiate(model, position, rotation) as GameObject;
if (newObject != null)
{
    // Set the parent of the new object to the GameObject it was placed on
    newObject.transform.parent = gameObject.transform;
}
I have a canvas.
In this canvas I have a closed path, and I am trying to morph this path into a different path.
Paths can have any number of points (>= 3).
I have two paths:
var path1 = "M50,50L200,50,200,200,50,200z";
var path2 = "M300,200L50,200,50,50,200,50z";
This is what I'm using to animate the morphing:
var path = paper.path(path1).attr({ 'stroke': 'black', 'fill': 'white' });
var currentPath = 1;
path.click(function () {
    if (currentPath == 1) {
        path.stop().animate({ d: path2 }, 2000, function () {
            currentPath = 2;
        });
    } else {
        path.stop().animate({ d: path1 }, 2000, function () {
            currentPath = 1;
        });
    }
});
This is the situation I want to achieve:
http://jsfiddle.net/MichaelSel/vgw3qxpg/7/
This is the situation I want to avoid:
http://jsfiddle.net/MichaelSel/vgw3qxpg/6/
Is there any way to tell Snap to do the animation by using the shortest distance to each point?
Note: I cannot just 'rewrite' the paths in reverse order (which would fix them) because it's the client who positions the points arbitrarily.
What can I do?
I would love to add more details if my question is unclear.
Thank you all.
No, there is no way to do this (other than coding your own complex morphing with checks).
Snap uses basic interpolation between the corresponding points of the two paths, so it's important to get the devs to move the points of the existing SVG rather than creating a new SVG from scratch whose points could start from any position.
To my knowledge, there is no reason why you couldn't still rotate the path points with a bit of clever code, even if the client has positioned them arbitrarily, but I don't think that's simple.
I've recently started learning HLSL after deciding that I wanted better lighting than what BasicEffect offered. After going through many tutorials, I found this and decided to learn from it:
http://brooknovak.wordpress.com/2008/11/13/hlsl-per-pixel-point-light-using-phong-blinn-lighting-model/
It seems that the shader above doesn't work very well in my game, though, because my game uses a tile-based approach, which means multiple models in a grid-like formation.
What happens is that each of my tiles gets shaded separately from the others. Please see this image for a visual reference:
http://i.imgur.com/1Sfi2.png
I understand that this is because each tile has its own model, and the shader doesn't take other models into account, since it executes on the meshes of a single model.
Now, for the question: how does one go about shading all the tiles together? I understand that I may have to write a shader from scratch to accomplish this, but if anyone could give me some tips on how to achieve the effect I want, I'd really appreciate it.
It's late so there's a possibility that I've forgotten something. If you need more information, please tell me and I'll add it.
Thanks, Merigrim
EDIT:
Here is my code for drawing a model:
public void DrawModel(Model model, Matrix modelTransform, Matrix[] absoluteBoneTransforms, Vector3 color, float alpha = 1.0f, Texture2D texture = null)
{
    foreach (ModelMesh mesh in model.Meshes)
    {
        // Assign the custom effect to every part of the mesh
        foreach (ModelMeshPart part in mesh.MeshParts)
        {
            part.Effect = effect;
        }

        // mesh.Draw() applies the effect's passes itself, so no explicit
        // pass loop or pass.Apply() is needed here
        Matrix world = absoluteBoneTransforms[mesh.ParentBone.Index] * modelTransform;
        effect.Parameters["World"].SetValue(world);
        effect.Parameters["View"].SetValue(camera.view);
        effect.Parameters["Projection"].SetValue(camera.projection);
        effect.Parameters["CameraPos"].SetValue(camera.cameraPosition);

        Vector3 lookAt = camera.cameraPosition + camera.cameraDirection;
        effect.Parameters["LightPosition"].SetValue(new Vector3(lookAt.X, 1.0f, lookAt.Z - 5.0f));
        effect.Parameters["LightDiffuseColor"].SetValue(new Vector3(0.45f, 0.45f, 0.45f));
        effect.Parameters["LightSpecularColor"].SetValue(new Vector3(0.45f, 0.45f, 0.45f));
        effect.Parameters["LightDistanceSquared"].SetValue(40.0f);
        effect.Parameters["DiffuseColor"].SetValue(color);
        effect.Parameters["AmbientLightColor"].SetValue(Color.Black.ToVector3());
        effect.Parameters["EmissiveColor"].SetValue(Color.White.ToVector3());
        effect.Parameters["SpecularColor"].SetValue(Color.White.ToVector3());
        effect.Parameters["SpecularPower"].SetValue(10.0f);

        if (texture != null)
        {
            effect.Parameters["DiffuseTexture"].SetValue(texture);
        }

        mesh.Draw();
    }
}
It seems that the normals were the villain this time around. After correcting the normals in Blender, everything seems to work now.
I want to thank meds and Andrew Russell. Without your help I wouldn't have figured it out!
So now I know, when you have problems with your lighting, always check the normals first.