I would like to know how to calculate the volume of a solid object that is represented by a boundary representation (B-Rep). Any hints? Thanks in advance
You can use Gauss' theorem to turn the volume integral into a surface integral. You probably remember Stokes' theorem and Green's theorem from calculus; it's basically the same idea. It is fairly easy to evaluate if you can turn your B-Rep into a set of planar polygons. You can find some details here and here.
This isn't going to turn out well if your surface is non-manifold, but the problem isn't well defined in that case anyway.
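To make that concrete, here is a minimal Haskell sketch of the divergence-theorem approach, assuming you have already tessellated the B-Rep into outward-oriented triangles (the names V3 and meshVolume, and the cube test data, are made up for this example):

data V3 = V3 Double Double Double

cross :: V3 -> V3 -> V3
cross (V3 ax ay az) (V3 bx by bz) =
    V3 (ay * bz - az * by) (az * bx - ax * bz) (ax * by - ay * bx)

dot :: V3 -> V3 -> Double
dot (V3 ax ay az) (V3 bx by bz) = ax * bx + ay * by + az * bz

-- Signed volume of the tetrahedron spanned by the origin and one triangle.
signedTetraVolume :: (V3, V3, V3) -> Double
signedTetraVolume (a, b, c) = dot a (cross b c) / 6

-- For a closed, consistently outward-oriented mesh, the signed parts
-- outside the solid cancel, leaving exactly the enclosed volume.
meshVolume :: [(V3, V3, V3)] -> Double
meshVolume = sum . map signedTetraVolume

-- Sanity check: a unit cube as 12 outward-oriented triangles.
main :: IO ()
main = print (meshVolume cube)  -- prints 1.0 (up to rounding)
  where
    [a, b, c, d, e, f, g, h] =
      [ V3 x y z | z <- [0, 1], (x, y) <- [(0,0), (1,0), (1,1), (0,1)] ]
    cube =
      [ (a,d,c), (a,c,b)   -- bottom (z = 0)
      , (e,f,g), (e,g,h)   -- top    (z = 1)
      , (a,b,f), (a,f,e)   -- front  (y = 0)
      , (d,g,c), (d,h,g)   -- back   (y = 1)
      , (a,e,h), (a,h,d)   -- left   (x = 0)
      , (b,c,g), (b,g,f)   -- right  (x = 1)
      ]

This is the surface-integral form of Gauss' theorem applied to the field F(p) = p/3: each triangle contributes the signed volume of the tetrahedron it spans with the origin.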
Suppose we're given some sort of graph showing the feasible region of our optimization problem. For example: here is an image
How would I go about constructing these constraints in an integer optimization problem? Anyone got any tips? Thanks!
Mate, I agree with the others that you should be a little more specific than that paint-ish picture ;). In particular, you specify neither an objective nor an optimization direction, and you give no context for what about this graph should involve integer variables, apart from the existence of disjunctive feasible sets, which can be modeled with MIP techniques. It seems like your problem is the formalization of what you have conceptualized. However, in case you are just interested in modelling disjunctive regions, you should look into disjunctive programming techniques, such as "big-M" (note: big-M reformulations can be numerically problematic). You should aim for a convex-hull reformulation if you can attain one (fairly easily).
Back to your picture, it is quite clear that you have a problem in two real dimensions (let's say in R^2), where the constraints bounding the feasible set are linear (the lines making up the feasible polygons).
So you know that you have two dimensions and need two real continuous variables, say x[1] and x[2], to formulate each of your linear constraints (a[i,1]*x[1]+a[i,2]*x[2]<=rhs[i] for some index i corresponding to the number of lines in your graph). Additionally, your variables seem to be constrained to the first orthant, so x[1]>=0 and x[2]>=0 should hold. Now, to add disjunctions you want some constraints to hold only when a certain condition is true. Therefore, you can add two binary decision variables, say y[1] and y[2], and an additional constraint y[1]+y[2]=1, to state that only one set of constraints can be active at a time. You should be able to implement this with the help of big-M by reformulating the constraints as follows:
If your line bounds things from above:
a[i,1]*x[1]+a[i,2]*x[2]-rhs[i]<=M*(1-y[1]) if i corresponds to the one polygon,
a[i,1]*x[1]+a[i,2]*x[2]-rhs[i]<=M*(1-y[2]) if i corresponds to the other polygon,
and if your line bounds things from below:
-M*(1-y[1])<=-a[i,1]*x[1]-a[i,2]*x[2]+rhs[i] if i corresponds to the one polygon,
-M*(1-y[2])<=-a[i,1]*x[1]-a[i,2]*x[2]+rhs[i] if i corresponds to the other polygon.
It is important that M is large enough to deactivate the constraints of the unselected polygon, but not so large that it causes numerical issues.
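To make this concrete, here is a minimal worked formulation in LaTeX (hypothetical data, not read off your picture) for a feasible set that is the union of two axis-aligned boxes P1 = {0 <= x1 <= 1, 0 <= x2 <= 1} and P2 = {2 <= x1 <= 3, 0 <= x2 <= 1}:

\begin{align*}
  & x_1 - 1 \le M\,(1 - y_1) && \text{(right edge of } P_1\text{)} \\
  & x_1 - 3 \le M\,(1 - y_2) && \text{(right edge of } P_2\text{)} \\
  & 2 - x_1 \le M\,(1 - y_2) && \text{(left edge of } P_2\text{)} \\
  & 0 \le x_1, \quad 0 \le x_2 \le 1 && \text{(shared by both boxes, so no big-M needed)} \\
  & y_1 + y_2 = 1, \quad y_1, y_2 \in \{0, 1\}
\end{align*}

With y_1 = 1 the P_2 rows relax to x_1 - 3 <= M and 2 - x_1 <= M, which every point of P_1 satisfies once M >= 2; so here any M >= 2 works, and a huge M would only hurt numerics.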
That being said, I am by no means an expert on these disjunctive programming techniques, so feel free to chime in, add corrections or make things clearer.
Also, a more elaborate question typically yields more elaborate and satisfying answers ;) If you had gone to the effort of writing up a genuine small example problem, you would likely have gotten a full formulation of your problem, or even an executable piece of code, in no time.
In PostGIS there are two very similar functions: st_isValid and st_isSimple. I'd like to understand the difference between them for polygons. For st_isValid we have:
Some of the rules of polygon validity feel obvious, and others feel arbitrary (and in fact, are arbitrary).
Polygon rings must close.
Rings that define holes should be inside rings that define exterior boundaries.
Rings may not self-intersect (they may neither touch nor cross one another).
Rings may not touch other rings, except at a point.
For the st_isSimple we've got:
Returns true if this Geometry has no anomalous geometric points, such as self intersection or self tangency. For more information on the OGC's definition of geometry simplicity and validity, refer to "Ensuring OpenGIS compliancy of geometries"
Does it mean that any valid polygon is automatically simple?
Both functions check for compliance with similar OGC definitions of geometries, but they are defined for different geometry types (by dimension):
By OGC definition
a [Multi]LineString can (should) be simple
a [Multi]Polygon can (should) be valid
This implies that
a simple [Multi]LineString is always considered valid
a valid [Multi]Polygon is always considered simple (as in, it must have at least one simple closed LineString ring)
thus the answer is yes.
Strictly speaking, using the inherent checks of the OGC-defined functionality on the 'wrong' geometry type is useless.
PostGIS, however, liberally extends the functionality of ST_IsValid to use the correct checks for all geometry types.
I'm searching for trigonometric functions in pseudocode.
I'm not good at mathematics, so I can't do much with the formulas on Wikipedia.
Mainly I'm searching for sine, cosine, tangent, and their inverse functions (sin⁻¹, cos⁻¹, tan⁻¹).
There are also other trigonometric functions, but the ones above are the most important for me.
If possible, I would be happy if the pseudocode used only variables, for, if, and the operators +, -, *, /, %, and sqrt(), because I do not have a library with advanced mathematical functions.
Trigonometric functions are transcendental:
you cannot express them exactly in terms of polynomial algebra.
You can approximate them, though.
The usual approach is to use periodicity and symmetry to reduce an angle α to an equivalent angle α′ such that sin(α) = sin(α′) but α′ ≪ α.
Simply put, you reduce any angle to an angle in the first quadrant or similar; this is easier than it looks.
Once you have a small angle, you can use a Taylor series expansion to compute the function up to a fixed error magnitude.
Here is a tutorial page.
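Since you asked for pseudocode using only basic operators, here is a minimal sketch of this approach (written in Haskell, but the computation itself uses only +, -, * and /; all names are made up):

-- Hard-coded pi, since we are avoiding the math library.
myPi :: Double
myPi = 3.141592653589793

-- Range reduction via periodicity: sin x = sin (x - 2*pi*k).
reduce :: Double -> Double
reduce x = x - 2 * myPi * fromIntegral k
  where
    k = round (x / (2 * myPi)) :: Integer

-- Taylor series around 0: sin x = x - x^3/3! + x^5/5! - ...
-- Each term is the previous one times -x^2 / ((2n) * (2n + 1)).
sine :: Double -> Double
sine x0 = go x 1 x
  where
    x = reduce x0
    go term n acc
      | n >= 10   = acc   -- 10 terms: error below ~1e-8 on [-pi, pi]
      | otherwise = go term' (n + 1) (acc + term')
      where
        term' = term * negate (x * x) / fromIntegral ((2 * n) * (2 * n + 1))

main :: IO ()
main = mapM_ (print . sine) [0, myPi / 6, myPi / 2, 100]

Cosine works the same way with the even-power series 1 - x^2/2! + x^4/4! - ..., and tangent is just sine divided by cosine.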
Another approach is to use a lookup table.
This is especially useful when you can keep track of the required precision of the process, and it is very fast.
However, it takes more memory and may give rise to a step-looking function.
Here is an introductory page.
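Here is a matching sketch of the table approach (hypothetical table size and names, Haskell again): build the table once, then answer queries by linearly interpolating between the two nearest entries, which softens the step-looking artifacts:

import Data.Array (Array, listArray, (!))

tableSize :: Int
tableSize = 1024

-- sin sampled uniformly over one period [0, 2*pi). Built here with the
-- real sin; on a device without a math library you would precompute
-- these constants offline and paste them in.
table :: Array Int Double
table = listArray (0, tableSize - 1)
  [ sin (2 * pi * fromIntegral i / fromIntegral tableSize)
  | i <- [0 .. tableSize - 1] ]

-- sin(x) by table lookup with linear interpolation.
sineLut :: Double -> Double
sineLut x = (1 - frac) * table ! i + frac * table ! j
  where
    turns = x / (2 * pi)
    t     = turns - fromIntegral (floor turns :: Integer)  -- in [0, 1)
    pos   = t * fromIntegral tableSize
    i     = floor pos `mod` tableSize
    j     = (i + 1) `mod` tableSize
    frac  = pos - fromIntegral (floor pos :: Int)

main :: IO ()
main = mapM_ (print . sineLut) [0, 0.5, 2, 100]

The precision/memory trade-off is explicit here: doubling tableSize roughly quarters the interpolation error.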
Another approach is to use the CORDIC algorithm; it is especially suited to hardware that lacks multiplication support (like some MIPS and ARM chips).
From Wikipedia:
CORDIC is generally faster than other approaches when a hardware multiplier is not available (e.g., a microcontroller) [...]
On the other hand, when a hardware multiplier is available (e.g., in a DSP microprocessor), table-lookup methods and power series are generally faster than CORDIC.
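For illustration, here is a minimal CORDIC sketch in rotation mode (made-up names; a real implementation would use fixed-point integers and bit shifts where this sketch uses Doubles, and would store the small angle table as precomputed constants):

iterations :: Int
iterations = 32

-- The only stored "trigonometry": the angles atan(2^-i).
angles :: [Double]
angles = [ atan (2 ** negate (fromIntegral i)) | i <- [0 .. iterations - 1] ]

-- Constant gain of the pseudo-rotations, divided out once at the end.
gain :: Double
gain = product
  [ 1 / sqrt (1 + 2 ** (-2 * fromIntegral i)) | i <- [0 .. iterations - 1] ]

-- Rotation mode: start at (1, 0) and drive the residual angle z to 0
-- using only additions, subtractions and halvings (shifts in hardware).
cordicSinCos :: Double -> (Double, Double)  -- theta in [-pi/2, pi/2]
cordicSinCos theta = go 0 1 0 theta
  where
    go i x y z
      | i >= iterations = (gain * y, gain * x)  -- (sin, cos)
      | z >= 0          = go (i + 1) (x - y * p) (y + x * p) (z - angles !! i)
      | otherwise       = go (i + 1) (x + y * p) (y - x * p) (z + angles !! i)
      where p = 2 ** negate (fromIntegral i)

main :: IO ()
main = print (cordicSinCos 0.5)  -- close to (sin 0.5, cos 0.5)

Angles outside [-pi/2, pi/2] first need the same kind of quadrant reduction discussed above.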
How would one define a type for dimensions?
Can you define a type in terms of another type? (e.g. an inch is 72 PostScript points).
Would it even make sense to make a new type for a dimension unit?
I've seen libraries for other kind of units, but the dimensions I'd be interested in are:
scaled point (smallest, maybe Int?), point (65536 scaled points), pica (12 points), etc.
I think this is where phantom types can help. The dimensional package is a good place to start for understanding them. The code is literate Haskell and very readable, so I'd recommend reading through it.
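To illustrate with your units, here is a minimal phantom-type sketch (made-up names, not the dimensional package's API):

{-# LANGUAGE GeneralizedNewtypeDeriving #-}

-- Empty marker types: they carry no values, only meaning.
data ScaledPoint
data Point
data Pica

-- 'unit' is the phantom parameter: it appears in the type but in no
-- field, so at runtime a Dim is just an Integer.
newtype Dim unit = Dim Integer
  deriving (Show, Eq, Ord, Num)

-- Conversions state the scale factors exactly once.
pointToSp :: Dim Point -> Dim ScaledPoint
pointToSp (Dim n) = Dim (n * 65536)

picaToPoint :: Dim Pica -> Dim Point
picaToPoint (Dim n) = Dim (n * 12)

main :: IO ()
main = do
  let margin = Dim 2 :: Dim Pica
  print (pointToSp (picaToPoint margin))    -- Dim 1572864
  -- print (margin + (Dim 1 :: Dim Point))  -- rejected by the type checker

Adding picas to points is a compile-time error, yet arithmetic within a single unit comes for free from the derived Num instance.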
So I see graphical models expressed in plate notation in research papers and online all the time (for example: http://www.cs.princeton.edu/~blei/papers/BleiNgJordan2003.pdf).
Is there a quick and easy way to produce these? I've searched and searched, but all I've found are solutions like GraphViz, which are really way more powerful than what I need (and hence much more difficult to use). PGF/TikZ seems like my best bet, but again it seems like overkill.
Maybe my best bet is to just produce them in Inkscape, or bite the bullet and learn PGF/TikZ. They're just so popular that I thought there would be a simpler way to churn them out, but maybe not... TIA.
GraphViz really isn't that hard to learn. The basic language is really simple for these kinds of graphs. It took me just a few moments to replicate (more or less) the first example from that PDF, and the nice thing about it is that, due to its simplicity, it's quite easy to generate graphs procedurally from some other data source.
digraph fig1 {
    rankdir = LR; // order things from left to right

    // define alpha and beta as existing; not strictly necessary,
    // but it helps if you want to assign them specific shapes or colours
    α [shape=circle];
    β [shape=circle];

    subgraph cluster_M // names beginning with "cluster" get a box drawn, an odd hack
    {
        label = "M";
        θ [shape=circle];

        subgraph cluster_N
        {
            label = "N";
            z [shape=circle];
            w [shape=circle, style=filled];
            z -> w; // quite literally z points at w
        }

        θ -> z;
    }

    α -> θ;
    β -> w;
}
compiled with
dot -Tpng input.txt -o graph.png
it comes out looking like this. If having the labels below the bubbles was important, you could do that with a couple of extra lines, similarly if specific placement of nodes is important you can adjust that too. In fact, if you don't specify an image format, the default behaviour of dot is to output a version of the input file with co-ordinates for the position of each element.
Here is a more refined fork of Dietz's scripts: https://github.com/jluttine/tikz-bayesnet
Check out the excellent TikZ package by Laura Dietz, available from http://www.mpi-inf.mpg.de/~dietz/probabilistic-models-tikz.zip. A PDF with some examples is available at http://www.mpi-inf.mpg.de/~dietz/probabilistic-models-tikz.pdf.
I really like GLE (Graphics Layout Engine). It's what Christopher Bishop used in his book, "Pattern Recognition and Machine Learning". It has a simple syntax with variables, loops, and functions, and it supports TeX equations. Results are output as either PDF or EPS and look very nice.
Lots of examples are available, including this Bayes net from PRML.
As a complement to other answers: a "low-skills" approach I've used is to draw them in Google Slides, with some add-on for producing the formulas.