How to assign angle parameters in the Dimension built-in group in a Revit family using the Revit API

I am working on a project where I have to assign angle parameters in the Dimension built-in group using the Revit API in C#.
The angles are available in degrees.
When I set the value in degrees directly (say 11.25 degrees) I get an error which says "Constraints not satisfied".
So I added code to convert degrees to radians, but even this fails.
My current code is below:
FamilyManager famManager = famDoc.FamilyManager;
double angle = 11.25; // value in degrees
double dn = angle * Math.PI / 180; // converting degrees to radians
FamilyParameter fp = famManager.get_Parameter("BEND ANGLE");
if (fp != null)
{
    famManager.Set(fp, dn);
}
Another method that I tried:
double dn = UnitUtils.ConvertToInternalUnits(angle, DisplayUnitType.DUT_DECIMAL_DEGREES);
I am getting the same error.
Let me know if I am doing something wrong.

Your code is correct, assuming the value passed to Set is the converted radian value dn. The problem is that you cannot assign the value of 11.25° to this parameter: try to do it manually and you will get the same error message. There is a problem with the constraints of your model.

Related

How to use the IntersectWithLine function in VTK?

I have a point, and I create a line along the z axis to find the point of intersection with a certain mesh (to project the point onto the mesh along the z axis).
So I create a vtkCellLocator, but what is each parameter of the function? They are not described at all in the documentation:
int vtkCellLocator::IntersectWithLine(double a0[3], double a1[3], double tol,
                                      double &t, double x[3], double pcoords[3],
                                      int &subId, vtkIdType &cellId,
                                      vtkGenericCell *cell);
I've tested a bit, and it seems that a0 and a1 are the endpoints of our line, x is the intersection point that was found, and cellId is the id of the intersected cell.
What do the rest mean? What happens if I have multiple points of intersection? How does it choose the "best" cell of intersection from all the points of intersection?
The parameters for IntersectWithLine are inherited from the vtkCell class. It is a bit buried, but you can see the detailed description of the parameters in the vtkCell documentation. The implementation in vtkCellLocator uses this call to vtkCell::IntersectWithLine to define the parameters:
virtual int vtkCell::IntersectWithLine(const double p1[3], const double p2[3],
                                       double tol, double &t, double x[3],
                                       double pcoords[3], int &subId)
Intersect with a ray. Return parametric coordinates (both line and cell) and global intersection coordinates, given the ray definition p1[3], p2[3] and tolerance tol. The method returns a non-zero value if an intersection occurs. It also returns the parametric distance t between 0 and 1 along the ray at the intersection point, the point coordinates x[3] in data coordinates, and pcoords[3] in parametric coordinates. subId is the index within the cell if it is a composite cell, like a triangle strip.
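In other words, t parameterises the ray between the two endpoints. As a minimal sketch (plain C#, no VTK binding assumed) of how the hit point relates to the returned t:

// Recover the hit point from the returned parametric distance t in [0, 1]:
// x = a0 + t * (a1 - a0), component by component.
static double[] PointOnRay(double[] a0, double[] a1, double t)
{
    var x = new double[3];
    for (int k = 0; k < 3; k++)
        x[k] = a0[k] + t * (a1[k] - a0[k]);
    return x;
}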
The cellId that is returned is based on the parametric distance to the intersecting cell: the returned cellId is the one that minimizes the member function vtkCell::GetParametricDistance:
virtual double vtkCell::GetParametricDistance(const double pcoords[3])
Return the distance of the parametric coordinate provided to the cell. If inside the cell, a distance of zero is returned. This is used during picking to get the correct cell picked. (The tolerance will occasionally allow cells to be picked that are not really intersected "inside" the cell.)
Therefore it should be the cell that intersects the line within the tolerance and is closest to p1.
Sorry, I don't have a direct answer (well... the &t is probably the parameter of the line where the intersection happens, cellId is the id of the found cell, and cell is a pointer to the found cell, though you can just use cellId to get it). But I do have some advice as someone who works with VTK often: use the fact that it is open source. Just download the VTK sources and look directly into them to find your answers. Trust me, especially if you plan to work with VTK regularly, this will save you a lot of time in the end. The documentation is sadly sometimes a bit vague :(

Convert between image coordinates (i-j-k) and world coordinates (x-y-z) in VTK in C#

Does anyone know how I can convert image coordinates, acquired like this:
private void renderWindowControl1_Click(object sender, System.EventArgs e)
{
    int[] lastPos = this.renderWindowControl1.RenderWindow.GetInteractor().GetLastEventPosition();
    Z1TxtBox.Text = (_Slice1 + 1).ToString();
    X1TxtBox.Text = lastPos[0].ToString();
    Y1TxtBox.Text = (512 - lastPos[1]).ToString();
}
into physical coordinates.
TX Tal
VTK may have an elegant method call, but in general, in order to convert between a click and a physical location, you will need to use the information in your image's Image Plane Module (specifically Equation C.7.6.2.1-1):
http://dicom.nema.org/medical/dicom/current/output/html/part03.html#sect_C.7.6.2
Here are some insights I got from working on this project:
int[] lastPos = this.renderWindowControl1.RenderWindow.GetInteractor().GetLastEventPosition();
returns the pixel location of the click in the control. This is a problem because if the user zooms in, lastPos does not represent the location in the DICOM image.
The solution I found was to use the vtkPropPicker class. Code examples can be found here and here.
The picked coordinates (image_coordinate) are in world coordinates but without the origin offset, which means that:
1. If we want the pixel location (in the 512x512 grid), the x,y values should be normalized by the pixel spacing and image orientation. The values of these parameters can be acquired using Equation C.7.6.2.1-1, mentioned in the answer above:
vtkDICOMImageReader reader;
reader.GetPixelSpacing();
reader.GetImageOrientationPatient();
2. If we need the world physical location, we should add the origin offset for x and y:
reader.GetDataOrigin();
As for the Z axis: I didn't need it, so I am not sure.
That is my two cents on the matter; maybe there are more elegant ways, but I haven't found them.
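Putting these pieces together, here is a minimal sketch of the index-to-world conversion for an axis-aligned volume (no rotation in GetImageOrientationPatient()). The variable names are illustrative, and how your wrapper returns the 3-vectors may differ between ActiViz versions:

// world = origin + index * spacing, assuming an axis-aligned volume.
// 'reader' is an initialized vtkDICOMImageReader, as in the snippets above;
// (i, j, k) are the voxel indices recovered from the picked position.
double[] spacing = reader.GetPixelSpacing();   // (sx, sy, sz)
double[] origin = reader.GetDataOrigin();      // (ox, oy, oz)
double x = origin[0] + i * spacing[0];
double y = origin[1] + j * spacing[1];
double z = origin[2] + k * spacing[2];
// With a non-trivial image orientation, the row/column direction cosines
// must be folded in, per DICOM Equation C.7.6.2.1-1.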

Plotting graphs using Bezier curves

I have an array of points (x0,y0)... (xn,yn) monotonic in x and wish to draw the "best" curve through these using Bezier curves. This curve should not be too "jaggy" (e.g. similar to joining the dots) and not too sinuous (and definitely not "go backwards"). I have created a prototype but wonder whether there is an objectively "best solution".
I need to find control points for all segments (x(i),y(i)) .. (x(i+1),y(i+1)). My current approach (except for the endpoints) for a segment x(i)..x(i+1) is:
find the vector x(i-1)...x(i+1), normalize it, and scale it by factor * len(i,i+1) to give the vector for the leading control point
find the vector x(i+2)...x(i), normalize it, and scale it by factor * len(i,i+1) to give the vector for the trailing control point.
I have tried factor = 0.1 (too jaggy), 0.33 (too curvy) and 0.20 (about right). But is there a better approach which, say, makes the 2nd and 3rd derivatives as smooth as possible? (I assume such an algorithm is implemented in graphics packages.)
I can post pseudo/code if requested. Here are the three images (0.1/0.2/0.33). The control points are shown by straight lines: black (trailing) and red (leading).
Here's the current code. It's aimed at plotting Y against X (monotonic X) without closing the path. I have built my own library for creating SVG (the preferred output); this code creates triples of x,y in coordArray for each curve segment (control1, control2, end). The start point is assumed from the last operation (Move or Curve). It's Java but should be easy to interpret (CurvePrimitive maps to a cubic; "d" is the String representation of the complete path in SVG).
List<SVGPathPrimitive> primitiveList = new ArrayList<SVGPathPrimitive>();
primitiveList.add(new MovePrimitive(real2Array.get(0)));
for (int i = 0; i < real2Array.size() - 1; i++) {
    // create the path for segment p1 -> p2
    Real2 p0 = (i == 0) ? null : real2Array.get(i - 1);
    Real2 p1 = real2Array.get(i);
    Real2 p2 = real2Array.get(i + 1);
    Real2 p3 = (i == real2Array.size() - 2) ? null : real2Array.get(i + 2);
    Real2Array coordArray = plotSegment(factor, p0, p1, p2, p3);
    SVGPathPrimitive primitive = new CurvePrimitive(coordArray);
    primitiveList.add(primitive);
}
String d = SVGPath.constructDString(primitiveList);
SVGPath path1 = new SVGPath(d);
svg.appendChild(path1);
/**
 * @param factor to scale control points by
 * @param p0 previous point (null at start)
 * @param p1 start of segment
 * @param p2 end of segment
 * @param p3 following point (null at end)
 * @return coordArray holding control1, control2 and the segment end point
 */
private Real2Array plotSegment(double factor, Real2 p0, Real2 p1, Real2 p2, Real2 p3) {
    // create the p1-p2 curve
    double len12 = p1.getDistance(p2) * factor;
    Vector2 vStart = (p0 == null) ? new Vector2(p2.subtract(p1)) : new Vector2(p2.subtract(p0));
    vStart = new Vector2(vStart.getUnitVector().multiplyBy(len12));
    Vector2 vEnd = (p3 == null) ? new Vector2(p2.subtract(p1)) : new Vector2(p3.subtract(p1));
    vEnd = new Vector2(vEnd.getUnitVector().multiplyBy(len12));
    Real2Array coordArray = new Real2Array();
    Real2 controlStart = p1.plus(vStart);
    coordArray.add(controlStart);
    Real2 controlEnd = p2.subtract(vEnd);
    coordArray.add(controlEnd);
    coordArray.add(p2);
    // plot the control lines
    SVGLine line12 = new SVGLine(p1, controlStart);
    line12.setStroke("red");
    svg.appendChild(line12);
    SVGLine line21 = new SVGLine(p2, controlEnd);
    svg.appendChild(line21);
    return coordArray;
}
A Bezier curve requires the data points, along with the slope and curvature at each point. In a graphics program, the slope is set by the slope of the control line, and the curvature is visualized by its length.
When you don't have such control lines input by the user, you need to estimate the gradient and curvature at each point. The wikipedia page http://en.wikipedia.org/wiki/Cubic_Hermite_spline, and in particular the 'Interpolating a data set' section, has a formula that takes these values directly.
Typically, estimating these values from points is done using finite differences, so you use the values of the points on either side to help estimate. The only choice is how to deal with the endpoints, where there is only one adjacent point: you can set the curvature to zero, or if the curve is periodic you can 'wrap around' and use the value of the last point.
The wikipedia page I referenced also has other schemes, but most of them introduce some other 'free parameter' that you will need to find a way of setting, so in the absence of more information to help you decide, I'd go for the simple scheme and see if you like the results.
Let me know if the wikipedia article is not clear enough, and I'll knock up some code.
One other point to be aware of: what 'sort' of Bezier interpolation are you after? Most graphics programs do cubic Bezier in 2 dimensions (i.e. you can draw a circle-like curve), but your sample images look like 1D function approximation (as in, for every x there is only one y value). The graphics-program type of curve is not really mentioned on the page I referenced. The maths for converting estimates of slope and curvature into a control vector of the form illustrated on http://en.wikipedia.org/wiki/B%C3%A9zier_curve (Cubic Bezier) would take some working out, but the idea is similar.
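As a concrete illustration of the simple finite-difference scheme, here is a minimal C# sketch (illustrative names, not the poster's Java library). It uses Catmull-Rom tangents m[i] = (p[i+1] - p[i-1]) / 2, one-sided at the ends, and the standard identity that a Hermite segment equals a cubic Bezier with c1 = p1 + m1/3 and c2 = p2 - m2/3:

using System;

static class HermiteToBezier
{
    // Returns the two inner Bezier control points for the segment p[i] -> p[i+1].
    static (double X, double Y)[] BezierControls((double X, double Y)[] p, int i)
    {
        // Central differences inside, one-sided differences at the endpoints.
        (double X, double Y) Tangent(int k)
        {
            var a = p[Math.Max(k - 1, 0)];
            var b = p[Math.Min(k + 1, p.Length - 1)];
            double scale = (k == 0 || k == p.Length - 1) ? 1.0 : 0.5;
            return ((b.X - a.X) * scale, (b.Y - a.Y) * scale);
        }
        var m1 = Tangent(i);
        var m2 = Tangent(i + 1);
        return new[]
        {
            (p[i].X + m1.X / 3, p[i].Y + m1.Y / 3),         // leading control point
            (p[i + 1].X - m2.X / 3, p[i + 1].Y - m2.Y / 3), // trailing control point
        };
    }
}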
Below is a picture and algorithm for a possible scheme, assuming your only input is the three points P1, P2, P3.
Construct a line (C1,P1,C2) such that the angles (P3,P1,C1) and (P2,P1,C2) are equal. In a similar fashion, construct the other dark-grey lines. The intersections of these dark-grey lines (marked C1, C2 and C3) become the control points, in the same sense as in the images on the Bezier curve wikipedia site. So each red curve, such as (P3,P1), is a quadratic Bezier curve defined by the points (P3, C1, P1). The construction of the red curve is the same as given on the wikipedia site.
However, I notice that the control vector on the Bezier curve wikipedia page doesn't seem to match the sort of control vector you are using, so you might have to figure out how to equate the two approaches.
I tried this with quadratic splines instead of cubic ones, which simplifies the selection of control points (you just choose the gradient at each point to be a weighted average of the mean gradients of the neighbouring intervals, then draw tangents to the curve at the data points and put the control points where those tangents intersect), but I couldn't find a sensible policy for setting the gradients of the end points. So I opted for Lagrange fitting instead:
function lagrange(points) { // points is [ [x1,y1], [x2,y2], ... ]
    // See: http://www.codecogs.com/library/maths/approximation/interpolation/lagrange.php
    var j, n = points.length;
    var p = [];
    for (j = 0; j < n; j++) {
        p[j] = function (x, j) { // have to pass j cos JS is lame at currying
            var k, res = 1;
            for (k = 0; k < n; k++)
                res *= (k == j ? points[j][1] : ((x - points[k][0]) / (points[j][0] - points[k][0])));
            return res;
        };
    }
    return function (x) {
        var i, res = 0;
        for (i = 0; i < n; i++)
            res += p[i](x, i);
        return res;
    };
}
With that, I just make lots of samples and join them with straight lines.
This is still wrong if your data (like mine) consists of real-world measurements. These are subject to random errors, and if you use a technique that forces the curve to hit them all precisely, you can get silly valleys and hills between the points. In cases like these, you should ask yourself what order of polynomial the data should fit and... well... that's what I'm about to go figure out.

Calculate distance between two points in Bing Maps

I have a Bing map and two points, Point1 and Point2, and I want to calculate the distance between these two points. Is that possible?
And if I want to put a circle two thirds of the way along the path between Point1 and Point2, near Point2, how can I do that?
Microsoft has a GeoCoordinate.GetDistanceTo method, which uses the Haversine formula.
For me, other implementations returned NaN for distances that are too small. I haven't run into any issues with the built-in method yet.
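A minimal usage sketch (GeoCoordinate lives in System.Device.Location, so add a reference to System.Device; the coordinates are illustrative):

// GetDistanceTo returns the distance between the two coordinates in metres.
var point1 = new GeoCoordinate(47.6062, -122.3321);
var point2 = new GeoCoordinate(47.6097, -122.3331);
double metres = point1.GetDistanceTo(point2);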
See the Haversine formula, or even better the Vincenty formula, for how to solve this problem.
The following code uses the haversine approach to get the distance:
public double GetDistanceBetweenPoints(double lat1, double long1, double lat2, double long2)
{
    double dLat = (lat2 - lat1) / 180 * Math.PI;
    double dLong = (long2 - long1) / 180 * Math.PI;

    double a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2)
             + Math.Cos(lat1 / 180 * Math.PI) * Math.Cos(lat2 / 180 * Math.PI)
             * Math.Sin(dLong / 2) * Math.Sin(dLong / 2);
    double c = 2 * Math.Atan2(Math.Sqrt(a), Math.Sqrt(1 - a));

    // Calculate the radius of the earth at the given latitude.
    // For this you can use either of the two points.
    double radiusE = 6378135; // equatorial radius, in metres
    double radiusP = 6356750; // polar radius, in metres

    // Numerator part of the function
    double nr = Math.Pow(radiusE * radiusP * Math.Cos(lat1 / 180 * Math.PI), 2);
    // Denominator part of the function
    double dr = Math.Pow(radiusE * Math.Cos(lat1 / 180 * Math.PI), 2)
              + Math.Pow(radiusP * Math.Sin(lat1 / 180 * Math.PI), 2);
    double radius = Math.Sqrt(nr / dr);

    // Distance in metres.
    double distance = radius * c;
    return distance;
}
You can find a good site with more info here.
You can use a geographic library for (re-)projection and calculation operations if you need more accurate results or want to do some maths operations (e.g. transform a circle onto a spheroid/projection). Take a look at DotSpatial or SharpMap and the samples/unit tests/sources there; this might help solve your problem.
Anyway, if you know the geodesic distance and bearing, you can also calculate where the resulting target position (the centre of your circle) is; e.g. see the "Direct Problem" of Vincenty's algorithms. There are also some useful algorithm implementations for Silverlight/.NET.
You might also consider posting your question at GIS Stack Exchange, where they discuss GIS-related problems like yours. Take a look at the question on calculating a lat/long point x miles from another point (as you already know the whole distance now), or see the discussion there about distance calculations. This question is related to the problem of drawing a point on a line at a given distance, and is nearly the same (because you need a centre and a radius).
Another option is to use the ArcGIS API for Silverlight, which can also display Bing Maps. It is open source and you can learn the things you need there (or just use them, because they already exist in the SDK). See the Utilities examples tab within the samples.
As I already mentioned, take a look at this page to get more info regarding your problem. There you'll find a formula, JavaScript code and an Excel sample for calculating a destination point given a distance and bearing from a start point (see the headings there).
It shouldn't be difficult to translate that code to your C# world.
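For what it's worth, here is a minimal C# sketch of that destination-point calculation on a spherical earth (the same formula as on the page mentioned above; names are illustrative). The circle centre two thirds of the way along the path is just the destination at 2/3 of the total distance along the initial bearing from Point1 to Point2:

// Spherical-earth destination point: start (lat1, lon1) in degrees,
// initial bearing in degrees, distance in metres; returns (lat, lon) in degrees.
static (double Lat, double Lon) Destination(double lat1, double lon1,
                                            double bearingDeg, double distance)
{
    const double R = 6371000; // mean earth radius in metres
    double phi1 = lat1 * Math.PI / 180;
    double lambda1 = lon1 * Math.PI / 180;
    double theta = bearingDeg * Math.PI / 180;
    double delta = distance / R; // angular distance

    double phi2 = Math.Asin(Math.Sin(phi1) * Math.Cos(delta)
                + Math.Cos(phi1) * Math.Sin(delta) * Math.Cos(theta));
    double lambda2 = lambda1 + Math.Atan2(
        Math.Sin(theta) * Math.Sin(delta) * Math.Cos(phi1),
        Math.Cos(delta) - Math.Sin(phi1) * Math.Sin(phi2));

    return (phi2 * 180 / Math.PI, lambda2 * 180 / Math.PI);
}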

Why won't my raytracer recreate the "mount" scene?

I'm trying to render the "mount" scene from Eric Haines' Standard Procedural Database (SPD), but the refraction part just doesn't want to co-operate. I've tried everything I can think of to fix it.
This one is my render (with Watt's formula): (image source: philosoraptor.co.za)
This is my render using the "normal" formula: (image source: philosoraptor.co.za)
And this one is the correct render: (image source: philosoraptor.co.za)
As you can see, there are only a couple of errors, mostly around the poles of the spheres. This makes me think that refraction, or some precision error, is to blame.
Please note that there are actually 4 spheres in the scene; their NFF definitions (s x_coord y_coord z_coord radius) are:
s -0.8 0.8 1.20821 0.17
s -0.661196 0.661196 0.930598 0.17
s -0.749194 0.98961 0.930598 0.17
s -0.98961 0.749194 0.930598 0.17
That is, there is a fourth sphere behind the more obvious three in the foreground. It can be seen in the gap left between these three spheres.
Here is a picture of that fourth sphere alone: (image source: philosoraptor.co.za)
And here is a picture of the first sphere alone: (image source: philosoraptor.co.za)
You'll notice that many of the oddities present in both my version and the correct version are missing. We can conclude that these effects are the result of interactions between the spheres; the question is, which interactions?
What am I doing wrong? Below are some of the potential errors I've already considered:
Refraction vector formula.
As far as I can tell, this is correct. It's the same formula used by several websites and I verified the derivation personally. Here's how I calculate it:
double sinI2 = eta * eta * (1.0f - cosI * cosI);
Vector transmit = (v * eta) + (n * (eta * cosI - sqrt(1.0f - sinI2)));
transmit = transmit.normalise();
I found an alternate formula in 3D Computer Graphics, 3rd Ed by Alan Watt. It gives a closer approximation to the correct image:
double etaSq = eta * eta;
double sinI2 = etaSq * (1.0f - cosI * cosI);
Vector transmit = (v * eta) + (n * (eta * cosI - (sqrt(1.0f - sinI2) / etaSq)));
transmit = transmit.normalise();
The only difference is that I'm dividing by eta^2 at the end.
Total internal reflection.
I tested for this, using the following conditional before the rest of my intersection code:
if (sinI2 <= 1)
Calculation of eta.
I use a stack-like approach for this problem:
/* Entering object. */
if (r.normal.dot(r.dir) < 0)
{
    double eta1 = r.iorStack.back();
    double eta2 = m.ior;
    eta = eta1 / eta2;
    r.iorStack.push_back(eta2);
}
/* Exiting object. */
else
{
    double eta1 = r.iorStack.back();
    r.iorStack.pop_back();
    double eta2 = r.iorStack.back();
    eta = eta1 / eta2;
}
As you can see, this stores the previous objects that contained this ray in a stack. When exiting an object, the code pops the current IOR off the stack and uses it, along with the IOR beneath it, to compute eta. As far as I know this is the most correct way to do it.
This works for nested transmitting objects. However, it breaks down for intersecting transmitting objects. The problem here is that you need to define the IOR for the intersection region independently, which the NFF file format does not do. It's unclear, then, what the "correct" course of action is.
Moving the new ray's origin.
The new ray's origin has to be moved slightly along the transmitted path so that it doesn't intersect at the same point as the previous one.
p = r.intersection + transmit * 0.01f;
I've tried making this value smaller (0.001f and 0.0001f) but that makes the spheres appear solid. I guess those values don't move the rays far enough away from the previous intersection point.
EDIT: The problem here was that the reflection code was doing the same thing. So when an object was reflective as well as refractive, the origin of the ray ended up in completely the wrong place.
Number of ray bounces.
I've artificially limited the number of ray bounces to 4. I tried raising this limit to 10, but that didn't fix the problem.
Normals.
I'm pretty sure I'm calculating the normals of the spheres correctly: I take the intersection point, subtract the centre of the sphere, and divide by the radius.
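That calculation, as a one-line sketch (hypothetical C# Vector and Sphere types, not the poster's code):

// The result is unit length because the hit point lies at distance
// sphere.Radius from sphere.Centre.
Vector n = (hitPoint - sphere.Centre) / sphere.Radius;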
Just a guess based on doing an image diff (and without reading the rest of your question): the problem looks to me to be the refraction on the back side of the sphere. You might be:
doing it backwards, e.g. reversing (or not reversing) the indices of refraction, or
missing it entirely?
One way to check for this would be to look at the mount through a cube that is almost facing the camera. If the refraction is correct, the picture should be offset slightly but otherwise unaltered. If it's not right, the picture will seem slightly tilted.
So after more than a year, I finally figured out what was going on here. Clear minds and all that. I was completely off track with the formula. I'm now using a formula by Heckbert instead, which I am sure is correct because I proved it myself using geometry and discrete maths.
Here's the correct vector calculation:
double c1 = v.dot(n) * -1;
double c1Sq = pow(c1, 2);
/* Heckbert's formula requires eta to be eta2 / eta1, so I have to flip it here. */
eta = 1 / eta;
double etaSq = pow(eta, 2);

if (etaSq + c1Sq >= 1)
{
    Vector transmit = (v / eta) + (n / eta) * (c1 - sqrt(etaSq - 1 + c1Sq));
    transmit = transmit.normalise();
    ...
}
else
{
    /* Total internal reflection. */
}
In the code above, eta is eta1 (the IOR of the surface from which the ray is coming) over eta2 (the IOR of the destination surface), v is the incident ray and n is the normal.
There was another problem, which muddied the waters further: I had to flip the normal when exiting an object (which is obvious; I missed it because the other errors were obscuring it).
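That flip, as a short sketch (hypothetical names, following the same convention as the code above, where v is the incident direction and n the surface normal):

// When the ray exits an object, the geometric normal points with the ray,
// so flip it to face the incoming direction before applying the formula.
if (n.Dot(v) > 0)
    n = -n;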
Lastly, my line of sight algorithm (to determine whether a surface is illuminated by a point light source) was not properly passing through transparent surfaces.
So now my images line up properly :)
