I would like to implement a manifold alignment algorithm with sklearn.manifold.LocallyLinearEmbedding using my own way of defining neighbors, but the neighbors search is limited to neighbors_algorithm : {'auto', 'brute', 'kd_tree', 'ball_tree'}, default='auto'. Should I subclass the original class to achieve that, or is there a quicker way?
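One quicker route than subclassing may be to sidestep sklearn's internals entirely: the custom neighbor definition only enters LLE through the list of neighbor indices, so you can implement the two LLE steps yourself. Below is a minimal, unoptimized sketch of the standard Roweis & Saul formulation (not sklearn's implementation); custom_neighbors is a hypothetical callable you would supply.

```python
import numpy as np


def custom_lle(X, custom_neighbors, n_components=2, reg=1e-3):
    """LLE with an arbitrary neighbor rule. Returns (n_samples, n_components)."""
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.asarray(custom_neighbors(X, i))   # your neighbor definition
        Z = X[idx] - X[i]                          # neighbors centered on x_i
        C = Z @ Z.T                                # local Gram matrix
        C += np.eye(len(idx)) * reg * np.trace(C)  # regularize for stability
        w = np.linalg.solve(C, np.ones(len(idx)))  # reconstruction weights
        W[i, idx] = w / w.sum()
    # Embedding: smallest eigenvectors of (I - W)^T (I - W), dropping the
    # constant one. Dense eigh is fine for small data, not for large n.
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    vals, vecs = np.linalg.eigh(M)                 # eigenvalues in ascending order
    return vecs[:, 1:n_components + 1]


# Example "custom" rule: plain k-nearest neighbors, just to show the interface.
def knn_neighbors(X, i, k=10):
    d = np.linalg.norm(X - X[i], axis=1)
    return np.argsort(d)[1:k + 1]                  # skip the point itself

# Usage: Y = custom_lle(X, knn_neighbors)
```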
I was wondering if there is a direct way of computing the iteration matrix for the nth Linear Block Gauss-Seidel iteration within OpenMDAO?
Thank you.
If I understand you correctly, you are referring to the matrix form of the Gauss-Seidel algorithm, where you take Ax = b, break A up into its diagonal (D), lower (L), and upper (U) parts, and then use those parts to compute the next iterate.
Specifically, you compute [D-L]^-1. This, I believe, is what you are referring to as the "iteration matrix" (I am not familiar with that terminology, but based on the algorithm I'm comfortable making an educated guess).
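For reference, a minimal NumPy sketch of that matrix form (independent of OpenMDAO): with A = D - L - U, the Gauss-Seidel update is x_{k+1} = (D - L)^{-1} (U x_k + b), so the iteration matrix is T = (D - L)^{-1} U.

```python
import numpy as np

A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([1.0, 2.0, 3.0])

D = np.diag(np.diag(A))
L = -np.tril(A, k=-1)   # strictly lower part, sign flipped so that A = D - L - U
U = -np.triu(A, k=1)    # strictly upper part, sign flipped

M = np.linalg.inv(D - L)
T = M @ U               # the Gauss-Seidel "iteration matrix"

x = np.zeros_like(b)
for _ in range(25):
    x = T @ x + M @ b

print(x, np.linalg.solve(A, b))  # should agree to good precision
```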
This formulation of the algorithm is useful to think about and gives a simple way to implement it, but OpenMDAO takes a different approach. The LBGS algorithm implemented in OpenMDAO is set up to work in a matrix-free manner. That means it only interacts with the linear-operator methods solve_linear and apply_linear and never explicitly assembles the A matrix at all. Hence there is no opportunity to split A up into D, L, and U.
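As a generic illustration of what matrix-free means (this is plain SciPy, not OpenMDAO's API): the solver only ever sees a matrix-vector-product callback, so there is never an assembled A to split into D, L, and U.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres


def apply_A(x):
    # Stand-in for a linear operator: returns A @ x for the same 3x3 system
    # as above, without ever forming A explicitly.
    return np.array([4 * x[0] - x[1],
                     -x[0] + 4 * x[1] - x[2],
                     -x[1] + 4 * x[2]])


A_op = LinearOperator((3, 3), matvec=apply_A)
b = np.array([1.0, 2.0, 3.0])
x, info = gmres(A_op, b)   # the solver only calls apply_A, never sees A itself
```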
Depending on the way you constructed the model, the A matrix you would need might or might not exist at all, because OpenMDAO is capable of working in a completely matrix-free context. However, if all of your components use the compute_partials or linearize methods to provide partial derivatives, then the data you would need for the A matrix does exist in memory.
You'll have to dig for it a bit, and ironically the best place to see how to do that is in the DirectSolver, which actually does require that the matrix be formed in order to compute a factorization.
Also, in that code you'll see a function that can iteratively call the linear operator to construct a dense matrix even if the underlying components don't provide their partials directly. Please note that this approach to assembling the matrix is extremely slow and is not recommended for normal operations.
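The idea behind that function can be sketched generically (again, this is not the actual OpenMDAO code): apply the linear operator to each unit vector to recover the columns of A one at a time.

```python
import numpy as np


def assemble_dense(apply_linear, n):
    """apply_linear: callable returning A @ x; n: size of the square system."""
    A = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = 1.0
        A[:, j] = apply_linear(e)  # j-th column of A
    return A


# e.g. with apply_A from the sketch above:
# A_dense = assemble_dense(apply_A, 3)
```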
I am very new to Java and to ELKI. I have three-dimensional objects that carry information about their uncertainty (a multivariate Gaussian). I would like to use FDBSCAN to cluster my data. I am wondering whether it is possible to do this in ELKI using the UncertainObject class, but I am not sure how.
Any help or pointers to examples would be very useful.
Yes, you can use, e.g., SimpleGaussianContinuousUncertainObject to model uncertain data with Gaussian uncertainty. But if you want a full multivariate Gaussian, you will have to modify its source code. It is not a very complicated class.
Many of the algorithms assume you can put a bounding box around uncertain objects, in order to prune the search space (otherwise, you will always be in O(n^2)). This is more difficult with rotated Gaussians!
The key difficulty with using all of these is actually data input. There is no standard file format for specifying objects with uncertainty. Apparently, most people that work with uncertain data just use certain data, and add an artificial uncertainty to it. But even that needs a lot of parameters to tune, and I am not convinced by this approach.
I have a 2D Delaunay triangulation where each vertex is labeled with an elevation. I now want to remove vertices from the triangulation without changing its shape much (analogous to Douglas-Peucker for polylines).
There are a lot of mesh-coarsening algorithms for 3D meshes. But isn't there something simpler for my task?
Do not remove points from your existing model. Instead, construct a second one: start with a few convex hull points and then refine the new model in a divide-and-conquer style until comparison with the original model shows that the specified error bound is met. I have implemented it that way in the Fade library and it works well. You can try my 2.5D Douglas-Peucker implementation if you want; the student license is free.
But the best possible output quality also requires that feature lines are detected, simplified, and preserved. This is more involved; I am working on that topic and hope to provide results soon.
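If you want to experiment with the refinement idea described above before reaching for a library, here is a rough greedy-insertion sketch in Python/SciPy (my own interpretation, not the Fade implementation): keep the convex hull vertices, then repeatedly insert the vertex whose elevation is worst approximated by the current coarse triangulation, until the maximum vertical error is within tolerance.

```python
import numpy as np
from scipy.spatial import ConvexHull
from scipy.interpolate import LinearNDInterpolator


def simplify_tin(xy, z, tol):
    """xy: (n, 2) vertex positions, z: (n,) elevations, tol: max vertical error.

    Returns the indices of the kept vertices; Delaunay-triangulate xy[kept]
    afterwards to obtain the simplified TIN. Rebuilding the interpolator on
    every insertion is slow but keeps the sketch simple.
    """
    selected = set(ConvexHull(xy).vertices)        # start with the convex hull
    remaining = set(range(len(xy))) - selected
    while remaining:
        idx = sorted(selected)
        interp = LinearNDInterpolator(xy[idx], z[idx])
        rem = sorted(remaining)
        err = np.abs(interp(xy[rem]) - z[rem])
        err = np.nan_to_num(err)                   # guard against boundary NaNs
        worst = int(np.argmax(err))
        if err[worst] <= tol:                      # error bound is met everywhere
            break
        selected.add(rem[worst])                   # insert the worst-fit vertex
        remaining.discard(rem[worst])
    return sorted(selected)
```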
I want to extract the amount of translation, rotation, and scale between a template image and a source image. I want to use template matching, but I don't know how to extract the translation, rotation, and scale amounts. Could someone help me?
The problem you posed can be addressed in many ways but it doesn't look like template matching is the right solution.
One way of solving it could be to use SIFT to compute keypoints in each image and then match the features between the two pictures. Once you have the matches, you can compute the homography mapping between the two point sets. Notice that you need to handle wrong matches, but there are standard algorithms (such as RANSAC) for that. You can find examples of using SIFT with OpenCV in the OpenCV tutorials.
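A minimal sketch of that pipeline with OpenCV (assuming a build where SIFT is available, e.g. opencv-python >= 4.4, and hypothetical file names): instead of a full homography it estimates a similarity transform with estimateAffinePartial2D, which makes the rotation, scale, and translation easy to read off.

```python
import math
import cv2
import numpy as np

template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file names
source = cv2.imread("source.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(template, None)
kp2, des2 = sift.detectAndCompute(source, None)

# Ratio-test matching to discard ambiguous correspondences.
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m[0] for m in matches
        if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]

src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

# estimateAffinePartial2D uses RANSAC by default, which handles wrong matches.
M, inliers = cv2.estimateAffinePartial2D(src, dst)

# M = [[s*cos(a), -s*sin(a), tx], [s*sin(a), s*cos(a), ty]]
scale = math.hypot(M[0, 0], M[1, 0])
rotation_deg = math.degrees(math.atan2(M[1, 0], M[0, 0]))
translation = (M[0, 2], M[1, 2])
print(scale, rotation_deg, translation)
```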
A more complex way of handling it would be to perform a point-set registration. There is a very good algorithm called CPD (Coherent Point Drift) which, given two point sets, calculates the correspondence between points and estimates the transformation in a two-step optimization (Expectation-Maximization). CPD can assume different types of transformations, such as rigid, affine, and non-rigid. The reference implementation of CPD was written in Matlab with C via MEX.
How can I quickly "pick" a 2D element among a large number of vector graphics elements, such as polylines, polygons, and curves?
In Qt, the Graphics View framework (QGraphicsScene/QGraphicsView) can do this easily, but in my program I don't need those classes; I just use QPainter and QWidget, and I want to manage and render the element data myself.
So:
Which graphics topics should I search for on Google? A BSP tree? An R-tree?
Any advice would be appreciated. Thanks!
It seems that an R-tree is better suited to picking than a BSP tree. According to the Wikipedia article on spatial indexing, the R-tree is "typically the preferred method for indexing spatial data. Objects (shapes, lines and points) are grouped using the minimum bounding rectangle (MBR). Objects are added to an MBR within the index that will lead to the smallest increase in its size."
But are you sure it's worth your while to implement the creation, maintenance, and use of an R-tree yourself rather than just using QGraphicsScene/QGraphicsView?
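If you do decide to roll your own, the structure is simple: index each element's minimum bounding rectangle, query the index with the pick point, and run an exact hit test only on the few candidates returned. A conceptual sketch in Python using the rtree package (the same two-phase idea applies in C++ with any R-tree library):

```python
from rtree import index


class PickIndex:
    def __init__(self):
        self.idx = index.Index()       # 2D R-tree over bounding boxes
        self.elements = {}             # id -> element (hypothetical element type)

    def add(self, eid, element, bbox):
        """bbox = (minx, miny, maxx, maxy), the element's MBR."""
        self.elements[eid] = element
        self.idx.insert(eid, bbox)

    def pick(self, x, y, hit_test):
        # Broad phase: only elements whose MBR contains the pick point.
        candidates = self.idx.intersection((x, y, x, y))
        # Narrow phase: exact geometric test (point-in-polygon, distance
        # to polyline, etc.), supplied by the caller.
        return [self.elements[i] for i in candidates
                if hit_test(self.elements[i], x, y)]
```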