Closest distance between 2 convex hulls - graphics

I have two non-convex meshes and I want to find the closest distance between them. An approximate value is enough for my needs, as long as it does not deviate too much from the true value.
I break each non-convex mesh into many convex parts, and for speed I also compute the convex hull of each part.
Then I test the distances for all combinations of hulls between the first and the second mesh. The shortest of these defines the closest distance between the two meshes.
I already have a working solution with CGAL's Optimal Distances package (see image below). The result is good, but the runtime is not ideal; in fact, it is the main bottleneck in my pipeline.
It makes sense that this problem is resource-intensive, but it would be really nice to have a faster alternative, with CGAL or another library or approach, that gives a similar result. Unfortunately, I have not found any alternative so far.
Update:
The link provided above points to the CGAL example that I'm following, namely the "polytope_distance_d_fast_exact.cpp" example. Regarding the kernel, I use:
#include <CGAL/Homogeneous.h>
#include <CGAL/Polytope_distance_d.h>
#include <CGAL/Polytope_distance_d_traits_3.h>
#include <CGAL/MP_Float.h>
// use an inexact kernel...
typedef CGAL::Homogeneous<double> K;
typedef K::Point_3 Point;
// exact number type (the CGAL example uses CGAL::Gmpzf instead when GMP is available)
typedef CGAL::MP_Float ET;
// ... and the EXACT traits class based on the inexact kernel
typedef CGAL::Polytope_distance_d_traits_3<K, ET, double> Traits;
typedef CGAL::Polytope_distance_d<Traits> Polytope_distance;
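For reference, the distance query itself then follows the pattern of that example. In the sketch below, p and q are placeholder arrays holding the hull vertices of one convex part from each mesh:

// p and q hold the hull vertices of one convex part from each mesh
Polytope_distance pd(p, p + n, q, q + m);
// the squared distance is computed exactly; convert to double only for output
double sqd = CGAL::to_double(pd.squared_distance_numerator()) /
             CGAL::to_double(pd.squared_distance_denominator());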

It's a bit of an older library, and it's been a few years since I used it, but I seem to remember trying to do something similar with the GNU Triangulated Surface (GTS) library...
In particular, using the function gts_surface_distance to find the distance between two GtsSurfaces (which I think your non-convex meshes can still be represented as).
See here for more info.
I'm afraid I have no idea whether it will be faster for you, but it is perhaps worth a shot!
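From memory, the call looks roughly like the sketch below; treat the exact signature, the meaning of delta, and the GtsRange fields as assumptions to verify against the GTS reference manual (s1 and s2 stand for surfaces you have already built or loaded):

#include <stdio.h>
#include <gts.h>

GtsRange face_range, boundary_range;
// delta is the sampling resolution, as a fraction of the bounding-box size (assumed)
gts_surface_distance(s1, s2, 0.01, &face_range, &boundary_range);
// face_range.min should then approximate the closest distance from s1 to s2
printf("min distance: %g\n", face_range.min);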

Related

How would I construct an integer optimization model corresponding to a graph

Suppose we're given some sort of graph where the feasible region of our optimization problem is given. For example: here is an image.
How would I go about constructing these constraints in an integer optimization problem? Does anyone have any tips? Thanks!
Mate, I agree with the others that you should be a little more specific than that paint-ish picture ;). In particular, you are neither specifying any objective (or objective direction) nor giving any context for what about this graph should be integer-variable related, except for the existence of disjunctive feasible sets, which may be modeled by MIP techniques. It seems your real problem is formalizing what you have conceptualized. However, in case you are just interested in modelling disjunctive regions, you should look into disjunctive programming techniques such as "big-M" (note: big-M reformulations can be numerically problematic). You should aim for a convex-hull reformulation if you can attain one fairly easily.
Back to your picture: it is quite clear that you have a problem in two real dimensions (say R^2), where the constraints bounding the feasible set are linear (the lines making up the feasible polygons).
So you have two dimensions and need two real continuous variables, say x[1] and x[2], to formulate each of your linear constraints (a[i,1]*x[1] + a[i,2]*x[2] <= rhs[i] for each index i corresponding to a line in your graph). Additionally, your variables seem to be constrained to the first orthant, so x[1] >= 0 and x[2] >= 0 should hold. Now, to add disjunctions, you want some constraints to hold only when a certain condition is true. Therefore, you can add two binary decision variables, say y[1] and y[2], and an additional constraint y[1] + y[2] = 1, to state that only one set of constraints can be active at a time. You can implement this with big-M by reformulating the constraints as follows:
If your line bounds things from above:
a[i,1]*x[1] + a[i,2]*x[2] - rhs[i] <= M*(1 - y[1]) if i corresponds to the one polygon,
a[i,1]*x[1] + a[i,2]*x[2] - rhs[i] <= M*(1 - y[2]) if i corresponds to the other polygon,
and if your line bounds things from below:
-M*(1 - y[1]) <= a[i,1]*x[1] + a[i,2]*x[2] - rhs[i] if i corresponds to the one polygon,
-M*(1 - y[2]) <= a[i,1]*x[1] + a[i,2]*x[2] - rhs[i] if i corresponds to the other polygon.
It is important that M is sufficiently large, but not so large that it causes numerical issues.
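To make this concrete, here is a small made-up example. Suppose the two polygons are the boxes 0 <= x[1] <= 1 and 2 <= x[1] <= 3 (each with 0 <= x[2] <= 1), and all variables are bounded above by 3, so M = 3 suffices:
x[1] - 1 <= 3*(1 - y[1]) (upper bound, active when y[1] = 1)
-3*(1 - y[2]) <= x[1] - 2 (lower bound, active when y[2] = 1)
y[1] + y[2] = 1, with y[1], y[2] in {0, 1}.
If y[1] = 1, the first constraint reads x[1] <= 1 while the second relaxes to x[1] >= -1 and is not binding; if y[2] = 1, the roles swap and x[1] is forced into [2, 3].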
That being said, I am by no means an expert on these disjunctive programming techniques, so feel free to chime in, add corrections, or make things clearer.
Also, a more elaborate question typically yields more elaborate and satisfying answers ;). If you had gone to the effort of making up a small example problem, you would likely have gotten a full formulation of your problem, or even an executable piece of code, in no time.

Ruler and Compass construction in CGAL

I am trying to make basic constructions with CGAL, like "get the intersection of a line and a circle", "connect two points", or "create a circle". However, the choice of kernel seems to be a problem. When I use the exact 2D circular kernel, intersections give me Circular_arc_point_2, and I can't use this data type to create lines or circles; converting to Point_2 seems to introduce error and then stores the approximate value as an exact number. This problem seems to be independent of the choice of kernel.
What is a proper way to do these constructions? Exact and approximate numbers are both fine, as long as the data type stays consistent across the constructions.
In the worst case, if this is unresolvable, is there any other free library with this functionality?
The predefined Exact_circular_kernel_2 only uses rational numbers as its field type. To cover every constructible point, you should define a circular kernel that uses a FieldWithSqrt. With existing types and traits this is simple:
#include <CGAL/Exact_predicates_exact_constructions_kernel_with_sqrt.h>
#include <CGAL/Algebraic_kernel_for_circles_2_2.h>
#include <CGAL/Circular_kernel_2.h>
using L = CGAL::Exact_predicates_exact_constructions_kernel_with_sqrt;
using A = CGAL::Algebraic_kernel_for_circles_2_2<L::FT>;
using K = CGAL::Circular_kernel_2<L, A>;
Then you can convert a Circular_arc_point_2 p to Point_2 with the exact coordinates:
K::Point_2 q(p.x(), p.y());
Circular_arc_point_2 is a point whose coordinates are algebraic numbers of degree 2 (the only way to represent the intersection of two circles exactly). You can convert the point into a regular floating-point Point_2 by using, for example, Point_2(to_double(p.x()), to_double(p.y())), but then you lose the exactness.
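To get from an intersection back into further constructions, a sketch along these lines should work. The specific circle and line are made up, and the output type follows CGAL's circular-kernel intersection interface (CK2_Intersection_traits with a variant of point/multiplicity pairs); double-check the variant's contents against the documentation:

#include <CGAL/Exact_predicates_exact_constructions_kernel_with_sqrt.h>
#include <CGAL/Algebraic_kernel_for_circles_2_2.h>
#include <CGAL/Circular_kernel_2.h>
#include <CGAL/Circular_kernel_intersections.h>
#include <boost/variant.hpp>
#include <iterator>
#include <vector>

using L = CGAL::Exact_predicates_exact_constructions_kernel_with_sqrt;
using A = CGAL::Algebraic_kernel_for_circles_2_2<L::FT>;
using K = CGAL::Circular_kernel_2<L, A>;

int main()
{
    K::Circle_2 circle(K::Point_2(0, 0), 4);               // center, SQUARED radius
    K::Line_2   line(K::Point_2(-3, 1), K::Point_2(3, 1)); // line through two points
    // each intersection is a (point, multiplicity) pair inside a variant
    typedef std::pair<K::Circular_arc_point_2, unsigned> PointAndMult;
    typedef CGAL::CK2_Intersection_traits<K, K::Circle_2, K::Line_2>::type Result;
    std::vector<Result> out;
    CGAL::intersection(circle, line, std::back_inserter(out));
    for (const Result& r : out) {
        const PointAndMult& pm = boost::get<PointAndMult>(r);
        K::Point_2 q(pm.first.x(), pm.first.y());  // exact coordinates preserved
        // q can now be fed into further line/circle constructions
    }
    return 0;
}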

Trigonometric Functions in Pseudo Code

I'm searching for trigonometric functions in pseudocode.
I'm not good at mathematics, so I can't do much with the formulas on Wikipedia.
Mainly I'm searching for sine, cosine, tangent, and their inverse functions (sin⁻¹, cos⁻¹, tan⁻¹).
There are also other trigonometric functions, but for me the above are the most important.
If possible, I would be happy if the pseudocode used only variables, for, if, and the operators +, -, *, /, %, sqrt(), because I do not have a library with advanced mathematics functions.
Trigonometric functions are transcendental.
You cannot find an exact expression for them in terms of polynomial algebra.
You can approximate them, though.
The usual approach is to use periodicity and symmetry to reduce an angle α to an equivalent angle α′ such that sin(α) = sin(α′) but α′ ≪ α.
Simply put, you reduce any angle to an angle in the first quadrant or similar; this is easier than it looks.
Once you have a small angle, you can use a Taylor series expansion to compute the function up to a fixed error magnitude; a sketch follows below.
Here is a tutorial page.
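A minimal sketch of this idea in C++ (easy to transliterate into pseudocode), using only the operators the question allows; the term count of 9 is an illustrative accuracy choice:

// Approximate sin(x) using only +, -, *, /:
// 1) reduce x into [-pi, pi] by subtracting multiples of 2*pi,
// 2) sum the Taylor series x - x^3/3! + x^5/5! - ... for a fixed number of terms.
double sin_approx(double x) {
    const double PI = 3.14159265358979323846;
    // range reduction: bring x into [-pi, pi] (slow for huge x, fine for a sketch)
    while (x >  PI) x -= 2.0 * PI;
    while (x < -PI) x += 2.0 * PI;
    // each term is derived from the previous one:
    // term(k+1) = -term(k) * x^2 / ((2k) * (2k+1))
    double term = x;  // first term of the series
    double sum  = x;
    for (int k = 1; k <= 9; ++k) {
        term *= -x * x / ((2.0 * k) * (2.0 * k + 1.0));
        sum  += term;
    }
    return sum;
}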
Another approach is to use a lookup table.
This is especially useful when you can keep track of the required precision of the process, and it is very fast.
However, it takes more memory and may give rise to a step-looking function; see the sketch below.
Here is an introductory page.
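A minimal sketch of the table approach, assuming the table covers [0, pi/2] (other quadrants come from the symmetry reduction above). The table size and the use of std::sin to fill it are illustrative choices; the values could just as well be precomputed offline, e.g. with the Taylor sketch:

#include <cmath>

const int TABLE_SIZE = 256;
double SIN_TABLE[TABLE_SIZE + 1];  // sin on [0, pi/2], one extra entry for interpolation

void build_table() {
    const double PI = 3.14159265358979323846;
    for (int i = 0; i <= TABLE_SIZE; ++i)
        SIN_TABLE[i] = std::sin((PI / 2.0) * i / TABLE_SIZE);  // or precompute offline
}

// look up sin(x) for x in [0, pi/2], with linear interpolation between entries
double sin_lookup(double x) {
    const double PI = 3.14159265358979323846;
    double pos  = x / (PI / 2.0) * TABLE_SIZE;  // fractional table index
    int    i    = (int)pos;
    double frac = pos - i;
    return SIN_TABLE[i] + frac * (SIN_TABLE[i + 1] - SIN_TABLE[i]);
}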
Another approach is the CORDIC algorithm; this is especially suited to hardware that lacks multiplication support (like some MIPS and ARM chips). A sketch follows the quote below.
From Wikipedia:
CORDIC is generally faster than other approaches when a hardware multiplier is not available (e.g., a microcontroller) [...]
On the other hand, when a hardware multiplier is available (e.g., in a DSP microprocessor), table-lookup methods and power series are generally faster than CORDIC.
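For completeness, a sketch of CORDIC in rotation mode. Doubles and std::atan/std::sqrt stand in here for what would be precomputed constant tables on real fixed-point hardware, and the multiplications by 2^-i would be bit shifts; the input angle is assumed already reduced to roughly [-pi/2, pi/2]:

#include <cmath>

// CORDIC, rotation mode: rotate the vector (1, 0) by 'angle' in n steps of
// shrinking micro-rotations; the result is scaled by a known gain, so dividing
// by that gain yields cos(angle) and sin(angle).
void cordic_sincos(double angle, int n, double& c, double& s) {
    double x = 1.0, y = 0.0;
    double pow2 = 1.0;  // 2^-i
    double gain = 1.0;  // accumulated scaling of the micro-rotations
    for (int i = 0; i < n; ++i) {
        double sigma = (angle >= 0.0) ? 1.0 : -1.0;  // rotate toward the target
        double x_new = x - sigma * y * pow2;         // shifts/adds on real hardware
        double y_new = y + sigma * x * pow2;
        angle -= sigma * std::atan(pow2);            // atan(2^-i): a precomputed table
        x = x_new; y = y_new;
        gain *= std::sqrt(1.0 + pow2 * pow2);        // also a precomputed constant
        pow2 *= 0.5;
    }
    c = x / gain;  // cos(angle)
    s = y / gain;  // sin(angle)
}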

Transparency in Progressive Photon Mapping in CUDA

I am working on a project based on OptiX. I need progressive photon mapping, so I am using the Progressive Photon Mapping sample, but its transparency material is not implemented.
I've googled a lot and also tried to understand other samples that contain a transparency material (e.g. Glass, Tutorial, Whitted). In the end, I arrived at the following approach:
Find the hit point (intersection point) (h below).
Generate another ray from that point.
Use the color of the newly generated ray.
Below you can find the code for that part, but I do not understand why I get the black color (0.f, 0.f, 0.f) for the newly generated ray (step 3 above).
// trace a refraction ray from the hit point h in direction t
optix::Ray ray( h, t, rtpass_ray_type, scene_epsilon );
HitPRD refr_prd;
refr_prd.ray_depth  = hit_prd.ray_depth + 1;  // one level deeper in the recursion
refr_prd.importance = importance;
rtTrace( top_object, ray, refr_prd );         // should fill refr_prd.attenuation
result += (1.0f - reflection) * refraction_color * refr_prd.attenuation;
Any idea will be appreciated.
Please note that refr_prd.attenuation should contain some color after the call to rtTrace(). I've mentioned reflection and refraction_color only to help you understand the procedure; you can simply ignore them.
There are a number of methods to diagnose your problem.
Isolate the contribution of the refracted ray by removing any contribution of the reflection ray.
Make sure you have a miss program. HitPRD::attenuation needs to be written by all of your closest-hit programs and your miss programs. If you suspect the miss program is being called, set your miss color to something obviously bad ([1, 0, 1] is my favorite); a sketch of such a miss program follows below.
Use rtPrintf in combination with rtContextSetPrintLaunchIndex (or setPrintLaunchIndex) to print the individual values of the product and see which term is zero for a given pixel. If you don't restrict the output to a given launch index, you will get too much output. You probably also want to print the ray depth as well.
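For illustration, a minimal debug miss program in the classic (pre-OptiX 7) API; the HitPRD layout below is an assumption based on the snippet in the question:

#include <optix.h>
#include <optixu/optixu_math_namespace.h>

// assumed payload layout, mirroring the HitPRD used in the question
struct HitPRD
{
  optix::float3 attenuation;
  int           ray_depth;
  float         importance;
};

rtDeclareVariable(HitPRD, hit_prd, rtPayload, );

// miss program: write an obviously wrong color so escaped rays stand out
RT_PROGRAM void miss()
{
  hit_prd.attenuation = optix::make_float3(1.0f, 0.0f, 1.0f);  // magenta
}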

Dynamic Programming: top down versus bottom up comparison

Can you point me to some dynamic programming problem statements where bottom-up is more beneficial than top-down? (i.e. where iterative DP works more naturally, but memoization would be harder to implement?)
I find recursion with memoization much easier, and I want to solve problems where bottom-up is a better, or perhaps the only feasible, approach.
I understand that theoretically the two are equivalent, so even something like ease of implementation counts as a benefit.
Whether you apply bottom-up tabulation or top-down recursion with memoization depends on the problem at hand.
For example, if you have to find the maximum-weight independent set of a path graph, you would use the bottom-up approach, since you have to solve all the possible subproblems anyway.
But if you have to solve the knapsack problem, you may want to use top-down recursion with memoization, as you only have to solve a limited number of subproblems. Approaching the knapsack problem bottom-up would cause the algorithm to solve many redundant subproblems that are never used by the original problem; the sketch below illustrates the top-down variant.
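A minimal top-down memoized 0-1 knapsack sketch (the function shape and the std::map cache are illustrative choices):

#include <algorithm>
#include <map>
#include <vector>

// Top-down 0-1 knapsack: only the (item, capacity) states actually reached
// are solved and cached; a bottom-up table would fill in every combination,
// including many that are never needed.
int best(int item, int capacity,
         const std::vector<int>& weight, const std::vector<int>& value,
         std::map<std::pair<int, int>, int>& memo) {
    if (item < 0 || capacity <= 0) return 0;
    auto key = std::make_pair(item, capacity);
    auto it = memo.find(key);
    if (it != memo.end()) return it->second;
    int result = best(item - 1, capacity, weight, value, memo);  // skip item
    if (weight[item] <= capacity)                                // or take it
        result = std::max(result,
                          value[item] + best(item - 1, capacity - weight[item],
                                             weight, value, memo));
    memo[key] = result;
    return result;
}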
Two things to consider when deciding which approach to use:
Time complexity. Both approaches have the same asymptotic time complexity in general, but because a loop iteration is cheaper than a recursive function call, bottom-up can be faster when measured in machine time.
Space complexity (without considering the extra call-stack allocations of top-down). Usually both approaches need to build a table of all sub-solutions, but because bottom-up follows a topological order, its auxiliary space can sometimes be reduced to the size of the problem's immediate dependencies. For example, with fibonacci(n) = fibonacci(n-1) + fibonacci(n-2), we only need to store the past two calculations; see the sketch below.
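A minimal sketch of that space reduction for Fibonacci:

#include <cstdint>

// Bottom-up Fibonacci: following the topological order (smallest n first)
// means only the two most recent sub-solutions ever need to be stored.
std::uint64_t fib(int n) {
    if (n < 2) return static_cast<std::uint64_t>(n);
    std::uint64_t prev = 0, curr = 1;  // fib(0), fib(1)
    for (int i = 2; i <= n; ++i) {
        std::uint64_t next = prev + curr;
        prev = curr;
        curr = next;
    }
    return curr;
}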
That being said, bottom-up is not always the best choice; I will try to illustrate with examples:
(mentioned by @Nikunj Banka) Top-down only solves the sub-problems used by your solution, whereas bottom-up might waste time on redundant sub-problems. A silly example would be 0-1 knapsack with one item: the run-time difference is O(1) vs O(weight).
You might need to perform extra work to obtain the topological order for bottom-up. In Longest Increasing Path in a Matrix, if we want to process sub-problems after their dependencies, we have to sort all entries of the matrix in descending order; that's an extra O(nm log(nm)) of pre-processing time before the DP.
