Revit: Why is my wall BoundingBox not coinciding with its LocationCurve?

Using the Revit API, I split a wall into 3 parts. To do that, I create 3 Lines:
Line.CreateBound(p1, p2)
Line.CreateBound(p2, p3)
Line.CreateBound(p3, p4)
Then I create a wall with each of these lines, which are consecutive and aligned. The result isn't as expected: the third wall overlaps the second one, see the illustration below.
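For context, the creation step is essentially the following sketch (shown here as Python, e.g. via RevitPythonShell; doc, level_id, p1..p4 and the non-structural flag are placeholders, and the calls run inside an open Transaction):
from Autodesk.Revit.DB import Line, Wall

points = [p1, p2, p3, p4]  # the four consecutive, aligned points
for start, end in zip(points, points[1:]):
    line = Line.CreateBound(start, end)
    Wall.Create(doc, line, level_id, False)  # False = non-structural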
Now, this could be a programming error, but I print the Lines' end points just before creating the 3 walls, and these points are perfectly consecutive, in the right order. The print looks like this (I removed the Y and Z coordinates; they're constant):
Now creating a new wall, from (11.811023622, ...) to (17.388451444, ...)
Now creating a new wall, from (17.388451444, ...) to (18.044619423, ...)
Now creating a new wall, from (18.044619423, ...) to (28.871391076, ...)
If I then use the RevitLookup Addin to check that problematic wall, I see that its LocationCurve's origin is indeed located at (18.044619423, ...).
But if I look at its BoundingBox Min and Max properties, I can see that it starts at 17.388... and goes up to 28.871391076. That is what the illustration shows.
Furthermore, I use this split method on some other walls in my geometry without any problems, and I do obtain 3 nicely consecutive walls!
My question therefore is: am I missing a property somewhere that would somehow 'shift' the wall's BoundingBox away from its LocationCurve, and thus explain this behavior?
How else could I explain and correct this?
Thanks a lot!
Arnaud.

Maybe Revit is automatically connecting the walls somehow, and modifying their geometry in order to connect them well. Imagine, for example, two perpendicular walls along the X and Y axes, from (0,0) to (1,0) and to (0,1), respectively, with a wall thickness of 0.2. Revit will connect them. To do so, it will extend them in the corner where they meet at the origin. Because of that, neither bounding box ends at (0,0) (or at (0,-0.1) and (-0.1,0)), as you might expect. Instead, they will both have a common corner at (-0.1,-0.1), and thus both bounding boxes will have a maximal extension of 1.1 instead of 1.0. I hope this explanation is clear. A picture would say more than a thousand words... sorry about the attempt using words instead.

You may be able to prevent wall 3 from joining up with wall 1 by setting the location line JoinType property on both of them to JoinType.None.

EDIT: Using the WallUtils.DisallowWallJoinAtEnd function did the trick!
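For reference, a minimal sketch of that call (assuming, as above, Python access to the API; wall3 stands for the problematic third wall):
from Autodesk.Revit.DB import WallUtils

# Disallow automatic joins at both ends of the wall
# (0 = start of the location curve, 1 = end)
WallUtils.DisallowWallJoinAtEnd(wall3, 0)
WallUtils.DisallowWallJoinAtEnd(wall3, 1)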
So this is the status after some investigation: the third wall is indeed auto-expanding its BoundingBox so that it connects to the first wall. In doing so, it overlaps the small wall in between them ("Wall 2" in the picture below), which was only 20 cm long. Wall 2 is of a different type than walls 1 and 3 (which are of the same type), hence it is ignored when wall 3 looks for something to connect to.
Making "Wall 2" a bit longer (40 cm) helps and prevents the 3rd wall from auto-expanding towards the first wall; that is what I did here:
Then it's OK, but this doesn't really solve the problem. I didn't see any way of preventing the "auto-expansion" of the BoundingBox, or any way to control the maximum distance at which a wall looks for another wall to join.
I also tried first assigning 3 different wall types, and then changing the type of wall 3 to the same type as wall 1: while their types are different, there is no expansion; as soon as I change the type, wall 3 expands, even though it was already created.
Finally, the really strange behavior is that for some walls I don't have this problem at all, i.e. 3 walls of the same types as when I do have the problem, with the same 20 cm length for wall 2. This last part remains unexplained.

Related

Is spaces.Discrete appropriate for representing an integer range?

I want to represent something like hit points ranging 0-100 in my environment's observation space, along with some other stuff. My first idea was to use Tuple to tie together a Box for hit points with some Discretes for other things.
However, I've been using Stable Baselines, and it appears it doesn't support Tuple. I do see support for MultiDiscrete, and I could stuff hit points into a Discrete(100), but I worry that this would lose the information that 51 is very close to 52, rather than being as different from it as 1 is.
Is using Discrete like this wrong?
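One possible workaround, sketched here under the assumption that the other fields can be given numeric bounds as well, is to fold everything into a single Box (the extra bounds below are made-up placeholders):
import numpy as np
from gym import spaces  # classic gym; gymnasium is analogous

# Hit points in [0, 100] plus two other bounded quantities,
# flattened into one Box so Stable Baselines can consume it directly.
observation_space = spaces.Box(
    low=np.array([0.0, 0.0, 0.0], dtype=np.float32),
    high=np.array([100.0, 3.0, 5.0], dtype=np.float32),
    dtype=np.float32,
)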

How do I rotate an object so that it's always facing the mouse position?

I'm using ggez to make a game with some friends, and I'm trying to have our character rotate to face the pointer at all times. So far I know that I need to get an angle value (f32) in radians, and I think I can use atan2 to get it, but I just don't get the behavior that I want.
This is the code I have: (btw, move_data is a struct holding our player character's values, such as position, velocity, angle and rotation speed).
let m = mouse::position(ctx);
move_data.angle = ((m.y - move_data.position.y).atan2(move_data.position.x - m.x)) * (consts::PI / 2.0) as f32;
I think I'm close, as this already lets me rotate the character, but only in a sort of 'incomplete' way. The player character (pc) can mostly only face the upper-left corner when I move the mouse there. If the pointer is to the right of and/or below the pc, it only rotates slightly and slowly, and stops facing the pointer. I don't know if this description makes sense.
I think the problem is that I'm not entirely sure what atan2 is doing in the first place (I only remember some basic trigonometry), and I am also not sure if I'm using it correctly, so I don't exactly know what my code is doing. (Here is the documentation I used for atan2: https://doc.rust-lang.org/std/primitive.f64.html#method.atan2)
I've only gotten this far after much trial and error and Googling as much as I can (mostly Unity tutorials showed up when looking for algorithms to base my code on), and I've also asked in the unofficial Rust community Discord server, but nothing has worked so far.
I also had this code earlier, but couldn't find how to make it work either.
let m = mouse::position(ctx); // Type Point2
let mouse_pos = Vector2::new(m.x, m.y); // Transformed to Vector2 to be read by Matrix
move_data.angle = Matrix::angle(&mouse_pos, &move_data.position);
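For what it's worth, the usual formula needs no extra scaling, since atan2 already returns radians; with $(m_x, m_y)$ the mouse position and $(p_x, p_y)$ the player position, the angle of the player-to-mouse vector is
$$\theta = \operatorname{atan2}(m_y - p_y,\; m_x - p_x)$$
(depending on whether the drawing API's y-axis points down, the sign of the first argument may need to be flipped).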

Adjusting initial bullet angle to match user set distance(scope zero) (for math gods)

So my question is pretty specific, which means it was pretty hard to find anything that could help me on Google or Stack Overflow.
I want to give users the ability to set the distance/range on their guns. I have almost everything I need to make this happen; I just don't have the angle that I need to add to the direction angle the bullet leaves from. I don't know what equation/formula I would need to get this. I am not looking for anything code-specific, just an idea of what to do and how to do it.
Since I do not know what formula to use, I just started messing around with some numbers with this formula I found:
(This formula applies to an actual sniper scope)
Range = 1000 * ActualTargetHeight/TargetHeightInMils(on the scope)
BulletDrop = BulletDropSpeed*Range^2/2*VelocityOfTheBullet
MilsToRaiseScope = 1000 * BulletDrop * RangeToTarget
I just replaced Range with whatever zero the user is on.
I have a feeling I would just toss the MilsToRaiseScope into a trigonometry function. But I'm not sure.
If anyone is confused as to what I'm talking about, you can find an example of what I want in Battlefield 4 or any of the Arma games. With snipers, you can zero in the scope on to whatever distance you need so you won't have to adjust for bullet drop on the scope.
Sorry for the long question, just want to make sure everyone understands! :)
A mil corresponds to a (military) angular measurement unit of 1/1000 of a radian, so it is a ready-to-use angle.
The second formula looks strange. The height loss depends on the time of flight:
dH = g*t^2/2 = g * (Range / VelocityOfTheBullet)^2 / 2
where g is 9.81 m/s^2.
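Putting that together, a minimal flat-fire sketch in Python (no aerodynamic drag; the 300 m range and 850 m/s muzzle velocity are just illustrative):
import math

def zero_angle(range_m, muzzle_velocity):
    g = 9.81                           # m/s^2
    t = range_m / muzzle_velocity      # time of flight, s
    drop = g * t * t / 2.0             # height loss over that time, m
    angle = math.atan2(drop, range_m)  # elevation to add, radians
    return angle, angle * 1000.0       # radians and (approximately) mils

print(zero_angle(300.0, 850.0))  # ~0.002 rad, ~2 mils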
I am using a 2D table lookup for this.
I generate the table by doing a whole bunch of test firings at different angles and recording the path of the bullet for each angle.
Determining this analytically can get quite complex if aerodynamic drag is involved.
It is also discussed on this game-specific question.
For inspiration, this animated gif is great.
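A sketch of that table-driven idea, assuming the recorded range grows monotonically with the firing angle over the interval of interest (the sample numbers are made up):
import numpy as np

# Pairs recorded from test firings: launch angle (radians) -> range achieved (m)
angles = np.array([0.000, 0.002, 0.004, 0.006, 0.008])
ranges = np.array([0.0, 180.0, 350.0, 510.0, 660.0])

def angle_for_range(target_range):
    # Invert the table: interpolate the angle that reaches the requested range
    return np.interp(target_range, ranges, angles)

print(angle_for_range(400.0))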

Transparency in Progressive Photon Mapping in cuda

I am working on a project based on OptiX. I need to use progressive photon mapping, so I am trying to use the Progressive Photon Mapping sample, but the transparent material is not implemented there.
I've googled a lot and also tried to understand other samples that contain transparent materials (e.g. Glass, Tutorial, Whitted). In the end, I arrived at the following approach:
Find the hit point (intersection point) (h below)
Generate another ray from that point
Use the color returned for the newly generated ray
Below you can find the code for that part, but I do not understand why I get a black color (0.f, 0.f, 0.f) for the newly generated ray (step 3 above).
optix::Ray ray( h, t, rtpass_ray_type, scene_epsilon );
HitPRD refr_prd;
refr_prd.ray_depth = hit_prd.ray_depth+1;
refr_prd.importance = importance;
rtTrace( top_object, ray, refr_prd );
result += (1.0f - reflection) * refraction_color * refr_prd.attenuation;
Any ideas will be appreciated.
Please note that refr_prd.attenuation should contain some color after the call to rtTrace(). I've mentioned reflection and refraction_color only to help you better understand the procedure; you can simply ignore them.
There are a number of methods to diagnose your problem.
Isolate the contribution of the refracted ray, by removing any contribution of the reflection ray.
Make sure you have a miss program. HitPRD::attenuation needs to be written to by all of your closest hit programs and your miss programs. If you suspect the miss program is being called set your miss color to something obviously bad ([1,0,1] is my favorite).
Use rtPrintf in combination with rtContextSetPrintLaunchIndex or setPrintLaunchIndex to print out the individual values of the product and see which term is zero for a given pixel. If you don't restrict the output to a given launch index, you will get too much output. You probably want to print out the ray depth as well.

Prove that the products of the segments formed by two chords of a circle crossing each other are equal

This is a simple math question to which I had an answer a long time ago, but it doesn't come to mind right now.
Using the picture attached, how would I prove that ab = cd?
Illustration http://img3.imageshack.us/img3/8254/circledo.jpg
This seems to be what you're after:
http://www.cut-the-knot.org/proofs/IntersectingChordsTheorem.shtml
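For completeness, here is a sketch of the standard similar-triangles argument, with the chords labelled $AB$ and $CD$ meeting at $P$ (so $a = AP$, $b = PB$, $c = CP$, $d = PD$; these labels are assumed, since the original picture is not available):
$$\angle APC = \angle DPB \ \text{(vertical angles)}, \qquad \angle CAP = \angle BDP \ \text{(both subtend arc } BC\text{)},$$
so triangles $APC$ and $DPB$ are similar, and therefore
$$\frac{AP}{DP} = \frac{CP}{BP} \;\Longrightarrow\; AP \cdot PB = CP \cdot PD, \quad\text{i.e. } ab = cd.$$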
