I have a question regarding the way space is partitioned in the kd-tree algorithm.
Assume I have points in the plane, with (x, y) coordinates, and that we are not in the degenerate situation where all points lie on the same line. I was wondering why we need to alternate the splitting coordinate: at one level use the x axis, at the following level the y axis. What does it matter if we split space only along the x direction? We still get a binary tree, and a search still takes log(n) on average (assuming the tree is reasonably well balanced).
What do I gain by alternating the splitting directions? I wonder if it's related to some general probabilistic property in multiple dimensions.
I think the problem comes when you start searching the tree, for example with a window query (rectangular query).
Let's assume a rectangular dataset with evenly distributed points between -1,000 and 1,000 in every direction.
If you sort by x, then a query that should return all points with (-900 < x < 900) && (1 < y < 10) may have to scan almost the whole tree.
The log(n) search would only work if your query only limits x and not y, i.e. (1<x<10) && (-inf<y<+inf).
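To make the effect concrete, here is a minimal sketch in Python (with made-up, uniformly distributed data) of what a single-axis ordering gives you: a binary search narrows the x range in O(log n), but every point inside that x slab still has to be scanned to apply the y filter.

    import bisect
    import random

    random.seed(0)
    pts = sorted((random.uniform(-1000, 1000), random.uniform(-1000, 1000))
                 for _ in range(100000))             # ordered by x only
    xs = [p[0] for p in pts]

    lo = bisect.bisect_left(xs, -900)                # O(log n): locate the x slab
    hi = bisect.bisect_right(xs, 900)
    candidates = pts[lo:hi]                          # roughly 90% of all points
    hits = [p for p in candidates if 1 < p[1] < 10]  # linear scan to filter on y

    print(len(candidates), len(hits))                # many candidates, very few hits

Alternating the split axis is what lets the tree prune on y as well, so the window query only visits subtrees whose regions intersect the query rectangle.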
Suppose I have a series of (imperfect) azimuth readouts, giving me vague angles between a number of points. Lines projected from points A, B, C obviously never converge in a single point to define the location of point D. Hence, angles as viewed from A, B and C need to be adjusted.
To make it more fun, I might be more certain of the relative positions of specific points (suppose I locate them on a satellite image, or I know for a fact they are oriented perfectly north-south), so I might want to use that certainty in my calculations and NOT adjust certain angles at all.
By what technique should I average the resulting coordinates, to achieve a "mostly accurate" overall shape?
I considered treating the difference between non-adjusted and adjusted angles as "tension" and trying to "relieve" it in subsequent passes, but that approach gives priority to points calculated earlier.
Another approach could be to calculate the total "tension" in the set, then shake all angles by a random amount, see if that resulted in less tension, and repeat, trying to evolve a better solution.
As I understand it you have a bunch of unknown points (p[] say) and a number of measurements of azimuths, say Az[i,j] of p[j] from p[i]. You want to find the coordinates of the points.
You'll need to fix one point. This is because if the values p[] are a solution -- i.e. give the measured azimuths -- then so too is q[], where for some fixed offset x,
q[i] = p[i] + x
I'll suppose you fix p[0].
You'll also need to fix a distance. This is because if p[] is a solution, so too is q[] where now for some fixed s,
q[i] = p[0] + s*(p[i] - p[0])
I'll suppose you fix dist(p[0], p[1]), and that there is an azimuth Az[0,1]. You'd be best to choose p[0] and p[1] so that there is a reliable azimuth between them. Then we can compute p[1].
The usual way to approach such problems is least squares. That is we seek p[] to minimise
Sum over all measured pairs (i,j) of square( (Az[i,j] - Azimuth(p[i], p[j])) / S[i,j] )
where Az[i,j] is your measurement data
Azimuth( r, s) is the function that gives the azimuth of the point s from the point r
S[i,j] is the standard deviation ('sd') of the measurement Az[i,j] -- the higher the sd of a particular observation is, relative to the others, the less it affects the final result.
The above is a non linear least squares problem. There are many solvers available for this, but generally speaking as well as providing the data -- the Az[] and the S[] -- and the observation model -- the Azimuth function -- you need to provide an initial estimate of the state -- the values sought, in your case p[2] ..
It is highly likely that if your initial estimate is wrong the solver will fail.
One way to find this estimate would be to start with a set K of known point indices and seek to expand it. You would start with K being {0,1}. Then look for points that have as many azimuths as possible to points in K, and for such points estimate their position geometrically from the known points and the azimuths, and add them to K. If at the end you have all the points in K, then you can go on to the least squares. If you don't, it's possible that a different pair of initial fixed points might do better, or maybe you are stuck.
The latter case is a real possibility. For example, suppose you had points p[0],p[1],p[2],p[3] and azimuths Az[0,1], Az[1,2], Az[1,3], Az[2,3].
As above we fix the positions of p[0] and p[1]. But we can't compute positions of p[2] and p[3] because we do not know the distances of 2 or 3 from 1. The 1,2,3 triangle could be scaled arbitrarily and still give the same azimuths.
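For what it's worth, here is a hedged sketch of the least-squares step using scipy.optimize.least_squares. The azimuth convention (clockwise from north), the example measurements and the helper names (azimuth, ang_diff, residuals) are illustrative assumptions, not something from the question.

    import numpy as np
    from scipy.optimize import least_squares

    def azimuth(r, s):
        # azimuth of point s seen from point r, clockwise from north (x = east, y = north)
        d = s - r
        return np.arctan2(d[0], d[1])

    def ang_diff(a, b):
        # wrap the residual into (-pi, pi] so angles near north behave
        return (a - b + np.pi) % (2.0 * np.pi) - np.pi

    # illustrative measurements: (i, j, Az[i,j] in radians, sd S[i,j])
    meas = [(0, 1, 0.79, 0.01), (0, 2, 1.55, 0.02),
            (1, 2, 2.10, 0.02), (1, 3, 0.60, 0.05), (2, 3, 0.10, 0.05)]

    fixed = {0: np.array([0.0, 0.0])}                         # p[0] fixed
    fixed[1] = 100.0 * np.array([np.sin(0.79), np.cos(0.79)]) # fixed distance along Az[0,1]

    def residuals(x):
        pts = dict(fixed)
        for k, xy in enumerate(x.reshape(-1, 2)):
            pts[k + 2] = xy                                   # the unknowns are p[2], p[3], ...
        return [ang_diff(az, azimuth(pts[i], pts[j])) / sd for i, j, az, sd in meas]

    x0 = np.array([50.0, 10.0, 80.0, 60.0])                   # rough initial estimate of p[2], p[3]
    sol = least_squares(residuals, x0)
    print(sol.x.reshape(-1, 2))                               # refined p[2], p[3]

As noted above, the quality of x0 matters; the geometric expansion of the set K described above is one way to obtain it.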
I need to quickly find the k nearest points to a plane (or hyperplane) in 3 (or more) dimensions. Is there a fast way to perform this search, using some sort of clever data structure (similar to how a kd-tree works for k nearest neighbors)?
I know I can rotate the plane and all the points so that the plane is orthogonal to one of the axes, then measure the distance of each point to the plane by simply using the ordinate in that axis. However, the time complexity of this brute force approach is O(N), where N is the number of points. Since I have to find the k nearest neighbors for a large number of planes and a large number of points, I need to find a faster algorithm if possible.
I think you can simply use any spatial data structure (kd-tree, R-tree, ...) that supports custom distance functions. You should be able to define a distance function that simply uses the distance to the plane instead of distance to a center point.
How to calculate this distance is described by @Spektre.
I have no idea how that scales, because it may depend on the kNN search algorithm used by the implementation.
However, I believe the standard algorithm (Hjaltason and Samet: "Distance browsing in spatial databases.") should at least be better than O(n).
In case you are using Java, the R-Tree, Quadtree and PH-Tree indexes in my TinSpin library all use this algorithm.
Measure the distance by using the dot product with the hyperplane normal... So let:
n - be the hyperplane normal unit vector
p0 - be any point on the hyperplane
p[i] - be i-th point from your pointcloud i={ 0,1,2...n-1 }
then the distance to the hyperplane is:
d = |dot( p[i] - p0 , n )|
As you can see there is no need to transform/align anything, and it's O(1) per point without any expensive operations. I expect that any pre-sorting of points or clever structures will be slower than this in most cases...
Now you have 2 options: either compute d for each point and then quicksort, which leads to O(n.log(n)) time and O(n) space complexity.
Or remember the k closest points on the run, leading to O(k*n) time and O(k) space.
So if k is small (k < log(n)) or you do not have enough memory to spare, use the second approach; otherwise use the first one ...
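A short sketch of the second option in Python: one pass over the cloud keeping the k closest points in a bounded heap (so O(n*log(k)) time rather than the plain O(k*n) scan, but the same O(k) memory idea). The data and the function name are made up for illustration.

    import heapq
    import numpy as np

    def k_nearest_to_plane(points, p0, n, k):
        n = n / np.linalg.norm(n)                    # the normal must be a unit vector
        heap = []                                    # max-heap on distance via negated keys
        for p in points:
            d = abs(np.dot(p - p0, n))               # d = |dot( p[i] - p0 , n )|
            if len(heap) < k:
                heapq.heappush(heap, (-d, tuple(p)))
            elif d < -heap[0][0]:
                heapq.heapreplace(heap, (-d, tuple(p)))
        return [np.array(q) for _, q in sorted(heap, reverse=True)]   # nearest first

    pts = np.random.rand(10000, 3)
    near = k_nearest_to_plane(pts, p0=np.array([0.5, 0.5, 0.5]),
                              n=np.array([1.0, 1.0, 0.0]), k=5)
    print(near)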
Many questions deal with generating normals from depth or depth from normals, but I want to ask about a simple way to generate all the planar surfaces given the depth and normals of an image.
I already have depth and normal of each pixel in the image. For each pixel (ui, vi), assume that we can get its 3D coordinates (xi, yi, zi) with zi as the depth and normal vector (nix, niy, niz). Thus, a unique tangent plane is defined by: nix(x - xi) + niy(y - yi) + niz(z - zi) = 0. Then, for each pixel we can define a unique planar surface by the above equation.
What is a common practice for finding the function f such that f(u, v) = (x, y, z) (from pixel to 3D coordinates)? Is the pinhole model (plus the depth data) an effective and accurate one?
How does one generate all the planar surfaces efficiently? One way is to iterate through all the pixels in the image and find all the planes, but this seems like an inefficient method.
If it's a pinhole model
make sure your 3D data is not distorted by projection.
group your points by normal
This is easy or hard depending on the accuracy of the points/normals. Simply sort the points by normal, which leads to O(n.log(n)) where n is the number of points.
test/group by planes within a single normal group
The idea is to pick 3 points from a group, compute the plane from them, and test which points of the group belong to it. If the count is too low, you picked the wrong points (not belonging to the same plane) and need to pick different ones. Also, if the picked points are too close to each other or lie on the same line, you cannot get a correct plane from them.
The math function for plane is:
x*nx + y*ny + z*nz + d = 0
where (nx,ny,nz) is your normal of the group (unit vector) and (x,y,z) is your point position. So you just compute d from a known point (one of the picked ones (x0,y0,z0) ) ...
d = -x0*nx -y0*ny -z0*nz
and then just test which points satisfy this condition:
threshold = 1e-20; // just an accuracy margin
fabs(x*nx + y*ny + z*nz + d) <= threshold
Now remove the matched points from the group (move them into the found plane object) and apply this step again on the remaining points until their count is too low or no valid plane is found...
Then test another group, until no groups are left...
I think RANSAC can speed things up and avoid brute force in this case, but I have never used it myself, so google it ...
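A rough Python sketch of the steps above, with the pinhole back-projection and RANSAC left out. The normal quantization step, the thresholds and the seed-point choice are illustrative assumptions; unlike the text, this version simply discards a seed whose plane collects too few inliers instead of re-picking points.

    import numpy as np
    from collections import defaultdict

    def extract_planes(points, normals, normal_step=0.05, dist_threshold=1e-3, min_count=50):
        # group points whose (unit) normals fall into the same quantization bucket
        groups = defaultdict(list)
        for p, n in zip(points, normals):
            n = n / np.linalg.norm(n)
            key = tuple(np.round(n / normal_step).astype(int))
            groups[key].append((p, n))

        planes = []
        for members in groups.values():
            remaining = members
            while len(remaining) >= min_count:
                p0, n0 = remaining[0]                  # seed point of this group
                d = -np.dot(p0, n0)                    # d = -x0*nx - y0*ny - z0*nz
                inliers, rest = [], []
                for p, n in remaining:
                    if abs(np.dot(p, n0) + d) <= dist_threshold:   # |x*nx + y*ny + z*nz + d| <= threshold
                        inliers.append(p)
                    else:
                        rest.append((p, n))
                if len(inliers) >= min_count:
                    planes.append((n0, d, inliers))    # one planar surface found
                remaining = rest
        return planes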
A possible approach for the planes is to consider the set of normal vectors and perform clustering on them (for instance by k-means). Then every cluster can correspond to several parallel surfaces. By evaluating the distance from the origin (a scalar function), you can form sub-clusters which will separate those surfaces. Finally, points at constant distance can belong to different coplanar patches, which you can separate by connected component labelling.
It is likely that clustering on the normal vectors and distance simultaneously (hence in a 4D space) will yield better results and be simpler. Be sure to normalize the vectors. Another option is to represent the vectors by just two parameters (such as spherical angles), but this will lead to a quite non-uniform mapping, and create phase wrapping issues.
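Here is a hedged sketch of the simultaneous 4D clustering, using scikit-learn's KMeans on (unit normal, plane distance from origin) features. The number of clusters and the scaling of the distance coordinate are guesses that would need tuning on real data, and the final connected-component split is left out.

    import numpy as np
    from sklearn.cluster import KMeans

    def cluster_planes(points, normals, n_clusters=10, dist_scale=0.1):
        n = normals / np.linalg.norm(normals, axis=1, keepdims=True)   # normalize the vectors
        dist = np.sum(points * n, axis=1)                # distance of each tangent plane from the origin
        feats = np.hstack([n, dist_scale * dist[:, None]])             # one 4D feature per pixel
        return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)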
So I'm working on simulating a large number of n-dimensional particles, and I need to know the distance between every pair of points. Allowing for some error, and given the distance isn't relevant at all if it exceeds some threshold, are there any good ways to accomplish this? I'm pretty sure if I want dist(A,C) and already know dist(A,B) and dist(B,C) I can bound it by [|dist(A,B)-dist(B,C)| , dist(A,B)+dist(B,C)], and then store the results in a sorted array, but I'd like to not reinvent the wheel if there's something better.
I don't think the number of dimensions should greatly affect the logic, but maybe for some solutions it will. Thanks in advance.
If the problem were simply about calculating the distances between all pairs, then it would be an O(n^2) problem without any chance for a better solution. However, you are saying that if the distance is greater than some threshold D, then you are not interested in it. This opens the opportunity for a better algorithm.
For example, in 2D case you can use the sweep-line technique. Sort your points lexicographically, first by y then by x. Then sweep the plane with a stripe of width D, bottom to top. As that stripe moves across the plane new points will enter the stripe through its top edge and exit it through its bottom edge. Active points (i.e. points currently inside the stripe) should be kept in some incrementally modifiable linear data structure sorted by their x coordinate.
Now, every time a new point enters the stripe, you have to check the currently active points to the left and to the right no farther than D (measured along the x axis). That's all.
The purpose of this algorithm (as is typically the case with sweep-line approaches) is to push the practical complexity away from O(n^2) and towards O(m), where m is the number of interactions we are actually interested in. Of course, the worst case performance will still be O(n^2).
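Here is a compact Python sketch of that 2D sweep-line. A balanced tree would be the proper choice for the active set; a sorted list with bisect keeps the sketch short, at the cost of O(n) insertions.

    import bisect
    import math
    from collections import deque

    def close_pairs(points, D):
        pts = sorted(points, key=lambda p: (p[1], p[0]))   # sweep bottom to top
        active = []                                        # stripe contents, sorted by x
        window = deque()                                   # the same points in y (arrival) order
        pairs = []
        for x, y in pts:
            # points more than D below the new point leave the stripe
            while window and y - window[0][1] > D:
                old = window.popleft()
                active.pop(bisect.bisect_left(active, old))
            # only active points no farther than D along x need a distance check
            lo = bisect.bisect_left(active, (x - D, -math.inf))
            hi = bisect.bisect_right(active, (x + D, math.inf))
            for ax, ay in active[lo:hi]:
                if (ax - x) ** 2 + (ay - y) ** 2 <= D * D:
                    pairs.append(((ax, ay), (x, y)))
            bisect.insort(active, (x, y))
            window.append((x, y))
        return pairs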
The above applies to the 2-dimensional case. For the n-dimensional case I'd say you'll be better off with a different technique. Some sort of space partitioning should work well here, i.e. exploiting the fact that if the distance between partitions is known to be greater than D, then there's no reason to consider the specific points in these partitions against each other.
If the distance beyond a certain threshold is not relevant, and this threshold is not too large, there are common techniques to make this more efficient: limit the search for neighbouring points using space-partitioning data structures. Possible options are:
Binning.
Trees: quadtrees (2d), kd-trees.
Binning with spatial hashing.
Also, since the distance from point A to point B is the same as distance from point B to point A, this distance should only be computed once. Thus, you should use the following loop:
    for i in range(n - 1):          # i = 0 .. n-2
        for j in range(i + 1, n):   # j = i+1 .. n-1, so each pair is computed once
            d = distance(points[i], points[j])
Combining these two techniques is very common in n-body simulation, for example, where particles affect each other if they are close enough. Here are some fun examples of that in 2d: http://forum.openframeworks.cc/index.php?topic=2860.0
Here's an explanation of binning (and hashing): http://www.cs.cornell.edu/~bindel/class/cs5220-f11/notes/spatial.pdf
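For completeness, a small sketch of binning with spatial hashing for the thresholded case, written in 2D for brevity (more dimensions just add coordinates to the cell key). The cell size equals the threshold D, so each point only needs to be compared against its own cell and the neighbouring cells.

    import math
    from collections import defaultdict

    def pairs_within(points, D):
        cells = defaultdict(list)                        # grid cell -> indices of points in it
        for idx, (x, y) in enumerate(points):
            cells[(math.floor(x / D), math.floor(y / D))].append(idx)

        result = []
        for i, (x, y) in enumerate(points):
            cx, cy = math.floor(x / D), math.floor(y / D)
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    for j in cells.get((cx + dx, cy + dy), ()):
                        if j <= i:
                            continue                     # each pair handled once (j > i), as in the loop above
                        px, py = points[j]
                        if (px - x) ** 2 + (py - y) ** 2 <= D * D:
                            result.append((i, j))
        return result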
I have a number of cuboids whose positions and sizes are given with minimum and maximum x, y and z co-ordinates (so they are parallel to the main axes).
e.g. I might have the following 3 cuboids:
10.5 <= x <= 39.4, 90.73 <= y <= 110.2, 90.23 <= z <= 95.87
20.1 <= x <= 30.05, 9.4 <= y <= 37.6, 0.1 <= z <= 91.2
10.2 <= x <= 10.3, 0.1 <= y <= 99.8, 23.7 <= z <= 24.9
If I then give a point (e.g. (25.3,10.2,90.65)), is there a way to quickly determine which cuboid(s) I'm in?
Obviously I could just iterate over all the cuboids, but there are potentially millions of them, and I need this to go faster than simple iteration (something O(log n) or better would be great).
This sounds to me like a "fuzzy matching" type problem, and I notice that Apache Lucene supports range queries, but this seems to work the opposite way round (finding a point in a cuboid rather than a cuboid containing a point).
To slightly complicate matters further, the number of dimensions might be larger than 3 (it could be up to 20); i.e. I might be looking for "hypercuboids" rather than cuboids.
You're verging into the territory of "Binary Space Partitioning" and "Collision Detection"; essentially the idea is to store the cuboids in a tree-type structure, which divides the space they occupy into neat little boxes. The decision about which "part of space" each cuboid occupies is made during insertion into the tree structure.
Do a google search on Octrees.
Efficiently dividing 3D space, and the objects contained within that space, is quite a big portion of computer science, mostly used in the development of computer games. Some of the algorithms take a time factor into consideration, i.e. that the objects move between partition spaces.
One straightforward way to accelerate this query is to construct the following uniform grid data structure (often called bins) as a preprocessing step: put an n x n x n (in 3d) grid over your scene, and for every cell of the grid store a pointer to all the cuboids intersecting that cell. Now, for a query point you can compute directly which cell of the uniform grid it lies in, and then you only have to check the cuboids associated with that cell, not all the cuboids.
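A rough sketch of that uniform grid in Python, using the three example cuboids from the question as test data. The scene bounds and the resolution n are assumptions you would have to tune.

    from collections import defaultdict

    class CuboidGrid:
        def __init__(self, cuboids, lo, hi, n=64):
            # cuboids: list of (min_corner, max_corner) pairs of 3-tuples
            self.cuboids, self.lo, self.n = cuboids, lo, n
            self.cell = [(hi[d] - lo[d]) / n for d in range(3)]
            self.cells = defaultdict(list)
            for idx, (cmin, cmax) in enumerate(cuboids):
                a = [self._index(cmin, d) for d in range(3)]
                b = [self._index(cmax, d) for d in range(3)]
                for i in range(a[0], b[0] + 1):
                    for j in range(a[1], b[1] + 1):
                        for k in range(a[2], b[2] + 1):
                            self.cells[(i, j, k)].append(idx)   # cuboid overlaps this cell

        def _index(self, p, d):
            return max(0, min(self.n - 1, int((p[d] - self.lo[d]) / self.cell[d])))

        def containing(self, p):
            key = tuple(self._index(p, d) for d in range(3))
            return [i for i in self.cells.get(key, [])
                    if all(self.cuboids[i][0][d] <= p[d] <= self.cuboids[i][1][d] for d in range(3))]

    grid = CuboidGrid([((10.5, 90.73, 90.23), (39.4, 110.2, 95.87)),
                       ((20.1, 9.4, 0.1), (30.05, 37.6, 91.2)),
                       ((10.2, 0.1, 23.7), (10.3, 99.8, 24.9))],
                      lo=(0.0, 0.0, 0.0), hi=(120.0, 120.0, 120.0))
    print(grid.containing((25.3, 10.2, 90.65)))          # -> [1], the second cuboid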
Depending on how big the space is and how much the cuboid sizes vary, this method might not be very efficient, because it might be difficult to choose a good resolution n that accelerates the query enough without needing an enormous number of cells. To overcome this you might want to look into more adaptive ways to partition the space, such as kd-trees (kd-trees at wikipedia), which are basically binary trees partitioning the space with axis-aligned planes: see an example here, where the red plane divides the box into two parts, then the green into smaller parts, then the blue...
A query using a kd-tree would first traverse down to the leaf of the kd-tree where the query point is located and then check the local cuboids in that cell. Other space partitioning data structure options can be found here.
Another option would be to use bounding volume hierarchies, which group objects together in bounding volumes, and then group bounding volumes into larger bounding volumes, and so on, to get a hierarchy of bounding volumes. These adapt better to a scene and can handle moving objects more easily, but I would think for your setting space partitioning could work well... Anyway, for more details see this book chapter.