Geodesic computation on triangle meshes? [closed] - graphics

I am trying to compute the distance between two points along a triangulated surface (the geodesic distance). It looks like a basic operation, yet it is far from trivial. So I am wondering: are there any libraries that do this? My google-fu failed, so I would greatly appreciate any pointers.
(I am aware of CGAL and scipy.spatial, but I couldn't find anything in their docs; let me know if I missed something there.)

There are many implementations for computing geodesic distance on a triangle mesh. Some are approximate and some are exact.
1. Fast Marching method. This method is approximate; in practice the average error is below 1%. You can refer to Gabriel Peyre's implementation of the fast marching method in MATLAB: http://www.mathworks.com/matlabcentral/fileexchange/6110-toolbox-fast-marching
2. MMP method, proposed in [1] and implemented in [2] (the same method mentioned in the comment by Ante). This method is exact, and the code is at https://code.google.com/p/geodesic/ . A disadvantage is that on large meshes the MMP method consumes a lot of memory: O(n^2), where n is the number of vertices.
3. CH method, proposed in [3] and improved and implemented in [4]. This method is exact and consumes less memory than the MMP method. The code is at https://sites.google.com/site/xinshiqing/knowledge-share
4. Heat method, proposed in [5]. One implementation is at https://github.com/dgpdec/course . This method is approximate and requires a preprocessing step; after preprocessing, it is faster than the Fast Marching method.
If you only need a rough baseline before adopting one of these, see the graph-based sketch after the references.
[1] Joseph S. B. Mitchell, David M. Mount, and Christos H. Papadimitriou. 1987. The discrete geodesic problem. SIAM J. Comput. 16, 4 (August 1987), 647-668.
[2] Vitaly Surazhsky, Tatiana Surazhsky, Danil Kirsanov, Steven Gortler, and Hugues Hoppe. 2005. Fast exact and approximate geodesics on meshes. ACM Trans. Graph. (SIGGRAPH) 24, 3 (2005).
[3] Jindong Chen and Yijie Han. 1990. Shortest paths on a polyhedron. In SCG '90: Proceedings of the Sixth Annual Symposium on Computational Geometry. ACM Press, New York, NY, USA, 360-369.
[4] Shi-Qing Xin and Guo-Jin Wang. 2009. Improving Chen and Han's algorithm on the discrete geodesic problem. ACM Trans. Graph. 28, 4, Article 104 (September 2009), 8 pages.
[5] Keenan Crane, Clarisse Weischedel, and Max Wardetzky. 2013. Geodesics in heat: a new approach to computing distance based on heat flow. ACM Trans. Graph. 32, 5 (2013), Article 152.
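Here is the promised baseline sketch (my own suggestion, not one of the methods above): Dijkstra's algorithm on the mesh edge graph via scipy.sparse.csgraph. It overestimates the true geodesic distance, since paths are confined to mesh edges rather than crossing faces, but it needs nothing beyond SciPy and is a reasonable sanity check:

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import dijkstra

def edge_graph_distances(vertices, faces, source):
    """Approximate geodesic distances from `source` to all vertices.

    Runs Dijkstra on the edge graph, so distances are overestimates:
    paths may only travel along edges, never across faces. Assumes a
    consistently oriented mesh so each directed edge appears once.
    """
    # The three directed edges of every triangle.
    i = faces[:, [0, 1, 2]].ravel()
    j = faces[:, [1, 2, 0]].ravel()
    w = np.linalg.norm(vertices[i] - vertices[j], axis=1)
    n = len(vertices)
    graph = coo_matrix((w, (i, j)), shape=(n, n)).tocsr()
    graph = graph.maximum(graph.T)  # make every edge traversable both ways
    return dijkstra(graph, indices=source)

# Tiny example: a tetrahedron, distances from vertex 0.
V = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
F = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
print(edge_graph_distances(V, F, source=0))  # [0, 1, 1, 1]
```

The overestimate shrinks as the mesh is refined; for accurate results, use one of the methods listed above.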

Just to add to the answer by wxnfifth: the fast marching method need not be used alone; it can serve as the first step, producing a good approximation of the geodesic path that is then iteratively improved as follows:
1. Compose the strip of triangles containing the existing approximation of the path.
2. Find the shortest path within the strip, a task that can be solved exactly in linear time, for example by the Shortest Paths in Polygons method of Wolfgang Mulzer.
3. If that path passes through a vertex on the boundary of the triangle strip, consider the path along the other side of that vertex; if it is shorter, update the strip and restart the algorithm from step 2.
As for libraries where this is implemented, one can consider the open-source MeshLib, specifically the function computeSurfacePath. There is even a short video showing it at work on a sample mesh.

Related

Why is Standard Deviation based on the squared difference of an observation from the mean?

I am learning statistics, and have some basic yet core questions on SD:
s = the sample (the list of observations)
n = total number of observations
xi = the i-th observation
μ = arithmetic mean of all observations
σ = the usual definition of SD, i.e. ((1/(n-1))*sum([(xi-μ)**2 for xi in s]))**(1/2) in Python lingo
f = frequency of an observation value
I do understand that (1/n)*sum([xi-μ for xi in s]) would be useless (= 0), but would not (1/n)*sum([abs(xi-μ) for xi in s]) have been a measure of variation?
Why stop at power of 1 or 2? Would ((1/(n-1))*sum([abs((xi-μ)**3) for xi in s]))**(1/3) or ((1/(n-1))*sum([(xi-μ)**4 for xi in s]))**(1/4) and so on have made any sense?
My notion of squaring is that it 'amplifies' the measure of variation from the arithmetic mean, while the simple absolute difference is notionally a linear scale. Would it not amplify it even more if I cubed it (and took the absolute value, of course) or raised it to the fourth power?
I do agree that computationally cubes and fourth powers would have been more expensive. But by the same argument, absolute values would have been less expensive... so why squares?
Why is the Normal Distribution like it is, i.e. f = (1/(σ*math.sqrt(2*pi)))*e**((-1/2)*((xi-μ)/σ)**2)?
What impact would it have on the normal distribution formula above if I calculated SD as described in (1) and (2) above?
Is it only a matter of our 'getting used to the squares', it could well have been linear, cubed or quad, and we would have trained our minds likewise?
(I may not have been 100% accurate in my number of opening and closing brackets above, but you will get the idea.)
So, if you are looking for an index of dispersion, you actually don't have to use the standard deviation. You can indeed report the mean absolute deviation, the summary statistic you suggested. You merely need to be aware of how each summary statistic behaves; for example, the SD assigns more weight to outlying observations. You should also consider how each one can be interpreted. For example, with a normal distribution, we know how much of the distribution lies within ±2 SD of the mean. For some discussion of the mean absolute deviation (and other measures of average absolute deviation, such as the median absolute deviation) and their uses, see here.
Beyond its use as a measure of spread though, SD is related to variance and this is related to some of the other reasons it's popular, because the variance has some nice mathematical properties. A mathematician or statistician would be able to provide a more informed answer here, but squared difference is a smooth function and is differentiable everywhere, allowing one to analytically identify a minimum, which helps when fitting functions to data using least squares estimation. For more detail and for a comparison with least absolute deviations see here. Another major area where variance shines is that it can be easily decomposed and summed, which is useful for example in ANOVA and regression models generally. See here for a discussion.
As to your questions about raising to higher powers, they actually do have uses in statistics! In general, the mean (the first moment), the variance (the second central moment, related to the standard deviation), skewness (related to the third power) and kurtosis (related to the fourth power) are all moments of a distribution. Taking differences raised to those powers and standardizing them provides useful information about the shape of a distribution. The video I linked provides some easy intuition.
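As a hedged illustration with a synthetic sample (estimator conventions, such as the n-1 denominator or excess vs. raw kurtosis, vary by library), the quantities discussed above can be computed with NumPy/SciPy:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
s = rng.normal(loc=10, scale=2, size=1000)  # a synthetic sample

mean = s.mean()                      # first moment
mad = np.mean(np.abs(s - mean))      # mean absolute deviation (the OP's suggestion)
sd = s.std(ddof=1)                   # standard deviation, (n-1) denominator
skew = stats.skew(s)                 # standardized third central moment
kurt = stats.kurtosis(s)             # excess kurtosis (fourth moment, minus 3)
print(mean, mad, sd, skew, kurt)
```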
For some other answers and a larger discussion of why SD is so popular, see here.
Regarding the relationship of sigma and the normal distribution: sigma is simply a parameter that stretches the standard normal distribution, just as the mean shifts its location. This follows from the way the standard normal distribution (a normal distribution with mean = 0 and SD = variance = 1) is mathematically defined, and note that every normal distribution can be derived from the standard normal distribution. This answer illustrates this. Now, you can parameterize a normal distribution in other ways as well, but you do need to pin down sigma in some form, whether directly, via the variance, or via the precision (1/variance). In fact, even the mean and the mean absolute deviation would suffice, since for a normal distribution the mean absolute deviation equals sigma*sqrt(2/pi). Now, a deeper question is why normal distributions are so incredibly useful in representing widely different phenomena and crop up everywhere. I think this is related to the Central Limit Theorem, but I do not understand the proofs of the theorem well enough to comment further.
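A quick numerical check of the stretch-and-shift claim, and of the mean-absolute-deviation identity mentioned above (a synthetic sample, so values are approximate):

```python
import numpy as np

rng = np.random.default_rng(1)
z = rng.standard_normal(100_000)  # standard normal: mean 0, sd 1
x = 5 + 2 * z                     # normal with mean 5, sd 2
print(x.mean(), x.std())          # ~5.0, ~2.0
# Mean absolute deviation of a standard normal is sqrt(2/pi):
print(np.mean(np.abs(z)), np.sqrt(2 / np.pi))  # both ~0.798
```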

How to use ortools to solve quadratic programming in Python?

For example, how can I simply find the minimum of (x-1)^2 via ortools in Python?
I read the OR-Tools documentation, but I cannot find it. I know this is not a linear optimization problem, but I cannot find a suitable solver type in the docs.
Google OR-Tools does not support quadratic programming. This page contains a list of what it supports:
Google Optimization Tools (OR-Tools) is a fast and portable software suite for solving combinatorial optimization problems. The suite contains:
A constraint programming solver.
A simple and unified interface to several linear programming and mixed integer programming solvers, including CBC, CLP, GLOP, GLPK, Gurobi, CPLEX, and SCIP.
Graph algorithms (shortest paths, min cost flow, max flow, linear sum assignment).
Algorithms for the Traveling Salesman Problem and Vehicle Routing Problem.
Bin packing and knapsack algorithms.
The following link clarifies that the mixed integer programming (MIP) support does not include quadratic MIP (MIQP):
https://github.com/google/or-tools/issues/598
You might check out this resource for ideas of how to do QP in Python:
https://scaron.info/blog/quadratic-programming-in-python.html
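OR-Tools aside, here is a minimal hedged sketch of the toy problem from the question, using scipy.optimize.minimize (assuming SciPy is an acceptable dependency; any of the dedicated QP solvers surveyed in the blog post above would work just as well and scale better to real QPs):

```python
from scipy.optimize import minimize

# Minimize (x - 1)^2. OR-Tools has no QP solver, so use a general-purpose one.
result = minimize(lambda x: (x[0] - 1.0) ** 2, x0=[0.0])
print(result.x)  # ~[1.0]
```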

How to know one system is significantly better than another one?

I am studying lexical semantics. I have 65 pairs of synonyms with their sense relatedness. The dataset is derived from the paper:
Rubenstein, Herbert, and John B. Goodenough. "Contextual correlates of synonymy." Communications of the ACM 8.10 (1965): 627-633.
I extract sentences containing those synonyms, transform the neighbouring words appearing in those sentences into vectors, calculate the cosine distance between different vectors, and finally compute the Pearson correlation between the distances we calculate and the sense relatedness given by Rubenstein and Goodenough.
Suppose I get a Pearson correlation of 0.79 for Method 1 and 0.78 for Method 2. How do I test whether Method 1 is significantly better than Method 2?
Well, strictly speaking this is not a programming question, but since it is unanswered on the other Stack Exchange sites, I'll describe the approach I would take.
There are other benchmarks for similar tasks; you can check how your methods perform on those benchmarks and analyze the results. Some methods capture similarity more, others relatedness, and some both.
Here is the link to the WordVec Demo, which automatically scores your vectors and provides you the results.
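To address the significance question directly: a common first cut is to compare the two correlations after a Fisher r-to-z transform. The sketch below assumes the correlations come from independent samples; since your two methods are evaluated on the same 65 pairs, they are dependent, and a dependent-correlation test (e.g., Steiger's test) would be more appropriate, but the structure is the same:

```python
import numpy as np
from scipy.stats import norm

def compare_correlations(r1, n1, r2, n2):
    """Two-sided p-value for H0: rho1 == rho2 (independent samples)."""
    z1, z2 = np.arctanh(r1), np.arctanh(r2)        # Fisher r-to-z transform
    se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))  # standard error of z1 - z2
    z = (z1 - z2) / se
    return 2 * norm.sf(abs(z))

print(compare_correlations(0.79, 65, 0.78, 65))  # ~0.89: nowhere near significant
```

With only 65 pairs, a difference of 0.79 vs. 0.78 is far too small to call significant under any of these tests.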

Computational geometry algorithm which can deal with conic arc segments

I've just finished reading the book "Computational Geometry: Algorithms and Applications". The algorithms introduced in this book are very helpful for my future work.
But the algorithms in this book deal only with straight line segments. What I want to know is whether the same algorithms can deal with both straight lines and conic arcs.
For example: finding intersections of mixed line segments and conic arcs; offsetting a polygon with conic-arc edges; finding the convex hull of a concave polygon with conic-arc edges...
Third-party libraries like CGAL can deal with problems like this, but I want to know the details of the algorithms. What books or materials should I refer to?
In general, computational geometry with curved arcs is more complicated and less explored. But it is not unexplored, and often similar techniques suffice. One place to look is CGAL, as you know, and also LEDA.
(Added): In response to the request for literature references, you could start with the paper below, and search backward in time via its references, and forward in time via Google Scholar (which reports it is cited by 79 papers):
Eric Berberich, Arno Eigenwillig, Michael Hemmer, Susan Hert, Kurt Mehlhorn, Elmar Schömer
"A Computational Basis for Conic Arcs and Boolean Operations on Conic Polygons."
Lecture Notes in Computer Science Volume 2461, 2002, pp 174-186.
(Springer link)
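To give a flavor of why the curved case stays tractable with similar techniques: intersecting a line segment with a conic reduces to a quadratic in the segment parameter. A minimal sketch (the implicit-conic representation A*x^2 + B*x*y + C*y^2 + D*x + E*y + F = 0 is my choice here, not something from that paper):

```python
import math

def segment_conic_intersections(p0, p1, conic, eps=1e-12):
    """Intersect segment p0->p1 with the conic A x^2 + B x y + C y^2 + D x + E y + F = 0.

    Substituting p(t) = p0 + t*(p1 - p0) into the conic yields a quadratic
    a*t^2 + b*t + c = 0; roots with 0 <= t <= 1 lie on the segment.
    """
    A, B, C, D, E, F = conic
    x0, y0 = p0
    dx, dy = p1[0] - x0, p1[1] - y0
    a = A * dx * dx + B * dx * dy + C * dy * dy
    b = 2 * A * x0 * dx + B * (x0 * dy + y0 * dx) + 2 * C * y0 * dy + D * dx + E * dy
    c = A * x0 * x0 + B * x0 * y0 + C * y0 * y0 + D * x0 + E * y0 + F
    if abs(a) < eps:                 # direction is asymptotic to the conic: linear case
        ts = [-c / b] if abs(b) > eps else []
    else:
        disc = b * b - 4 * a * c
        if disc < 0:
            return []
        r = math.sqrt(disc)
        ts = [(-b - r) / (2 * a), (-b + r) / (2 * a)]
    return [(x0 + t * dx, y0 + t * dy) for t in ts if 0.0 <= t <= 1.0]

# Unit circle x^2 + y^2 - 1 = 0 against a horizontal segment through it:
print(segment_conic_intersections((-2, 0), (2, 0), (1, 0, 1, 0, 0, -1)))
# -> [(-1.0, 0.0), (1.0, 0.0)]
```

Real implementations, such as CGAL's arrangements with conic traits, add exact arithmetic and careful handling of tangencies and degenerate conics; that robustness work is where most of the algorithmic subtlety in the literature lives.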

Spatial geometry for augmented reality applications [closed]

Does anyone know any good book or web resource for geometric and mathematical fundamentals of augmented reality?
Thanks!
Here's a good library for Augmented Reality:
ARToolKit
Ports to various platforms:
NyARToolKit
A simple but still impressive sample application using this library:
Project Marble
A great read is Chapter 10 of the Black Art of 3d Game Programming. All the AR/3D maths you'll ever need is there.
Once you've mastered this stuff, you'll be ready for 3d spatial projections etc, for AR/Target tracking.
I can't point to any specific book right now, but depending on your math background I'd suggest going in this order:
Vector and Linear algebra, intermediate level, up to matrix operation, LU decomposition, cross product.
Projective geometry, up to homogeneous coordinates and planar homography (a small sketch follows this list)
3d graphics, viewing and projection matrix, frustum
Basics of image processing, thresholds, edge detection, line detection
After those four topics you can understand rectangular marker tracking
Calculus of many variables, Fourier transform, DFT
Least squares method
Intermediate linear algebra, eigenvalues, eigenvectors, SVD
Advanced numerical methods, nonlinear least-squares, Gauss-Newton, Levenberg-Marquardt
Advanced image processing, blob detection, SIFT/SURF/FAST
Intermediate projective geometry: Essential and fundamental matrices, epipolar geometry
Bundle adjustment
After that you can understand markerless tracking
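Picking out the projective-geometry item from the roadmap above, here is a hedged sketch of what a planar homography does, in plain NumPy (the matrix values are made up for illustration, not from any tracking library):

```python
import numpy as np

def apply_homography(H, pts):
    """Map (N, 2) points through a 3x3 homography in homogeneous coordinates."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # lift to (x, y, 1)
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # divide out the w coordinate

# A made-up homography: scale, shear, translation, and a little perspective.
H = np.array([[1.2, 0.1, 5.0],
              [0.0, 0.9, -3.0],
              [1e-3, 0.0, 1.0]])
corners = np.array([[0, 0], [100, 0], [100, 100], [0, 100]], dtype=float)
print(apply_homography(H, corners))  # where the marker's corners land in the image
```

Marker tracking essentially runs this in reverse: given where the corners landed, estimate H, then decompose it into camera rotation and translation.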
And some more advanced math which is used in cutting edge AR:
Understanding of basics of Lie groups and algebras
Statistics, robust estimators
Quaternions
Kalman filters
Clifford algebras (Geometric algebra) - generalization of quaternions
Wavelets
Advanced projective geometry (like trifocal tensor, 5-point algorithm)
I'd recommend the following two books. Both are pricey but contain lots of really useful stuff in Projective Geometry which is what you need to know.
It's hard going, though, so unless you really want to understand the maths behind it, you may want to use a third-party library as suggested above.
Multiple View Geometry in Computer Vision by Hartley and Zisserman
and
Three Dimensional Computer Vision: A Geometric Viewpoint by Faugeras
