A 20x20 matrix is given in which ones represent barriers and zeros represent passable cells. Given the coordinates of the start and the goal, I need to find a path to the goal using A*, but I don't know what to use as h(n) and what as g(n).
Here is an example of a possible area (matrix); the start point is shown in red and the goal in blue.
g(n) is the length of the path from the start to the current position.
h(n) is an estimate that must not overestimate (i.e. it must be a lower bound on) the remaining cost from the current position to the goal, for example the Manhattan distance:
abs(y-y1) + abs(x-x1)
where (x1,y1) are the coordinates of the goal and (x,y) is the current position.
g(n) = Number of hops (i.e. boxes traversed on the grid) taken in the path from the start node to n
and h(n) can be the Euclidean distance from n=(nx,ny) to the goal g=(gx,gy), i.e.
h(n)=sqrt((nx-gx)^2+(ny-gy)^2)
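For concreteness, here is a minimal Java sketch of both pieces, assuming integer grid coordinates; the function names are just illustrative, not part of any library:

// Sketch only: cost pieces for a grid A*, not a full implementation.
// f(n) = g(n) + h(n); A* always expands the open node with the lowest f(n).
static int g(int hopsFromStart) {
    return hopsFromStart;                                   // boxes traversed so far
}

static double hEuclidean(int nx, int ny, int gx, int gy) {  // straight-line estimate
    return Math.sqrt((nx - gx) * (nx - gx) + (ny - gy) * (ny - gy));
}

static int hManhattan(int nx, int ny, int gx, int gy) {     // the other answer's estimate
    return Math.abs(nx - gx) + Math.abs(ny - gy);           // admissible on 4-connected grids
}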
I need to quickly find the k nearest points to a plane (or hyperplane) in 3 (or more) dimensions. Is there a fast way to perform this search, using some sort of clever data structure (similar to how a kd-tree works for k nearest neighbors)?
I know I can rotate the plane and all the points so that the plane is orthogonal to one of the axes, then measure the distance of each point to the plane simply by its coordinate on that axis. However, the time complexity of this brute-force approach is O(N), where N is the number of points. Since I have to find the k nearest neighbors for a large number of planes and a large number of points, I need to find a faster algorithm if possible.
I think you can simply use any spatial data structure (kd-tree, R-tree, ...) that supports custom distance functions. You should be able to define a distance function that simply uses the distance to the plane instead of distance to a center point.
How to calculate this distance is described by @Spektre.
I have no idea how that scales, because it may depend on the kNN search algorithm used by the implementation.
However, I believe the standard algorithm (Hjaltason and Samet: "Distance browsing in spatial databases.") should at least be better than O(n).
In case you are using Java, the R-Tree, Quadtree and PH-Tree indexes in my TinSpin library all use this algorithm.
measure the distance by using the dot product with the hyperplane normal... So let:
n - be the hyperplane normal unit vector
p0 - be any point on the hyperplane
p[i] - be i-th point from your pointcloud i={ 0,1,2...n-1 }
then the distance to the hyperplane is:
d = |dot( p[i] - p0 , n )|
as you can see there is no need to transform/align anything and it is O(1) per point, without any expensive operations. I expect that any pre-sorting of points or use of clever structures will be slower than this in most cases...
Now you have 2 options: either compute d for each point and then quicksort, which leads to O(n.log(n)) time and O(n) space complexity,
or keep track of the k closest points on the fly, leading to O(k*n) time and O(k) space.
So if k is small (k < log(n)) or you do not have enough memory to spare, use the second approach; otherwise use the first one...
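A rough Java sketch of the second option (O(k*n) time, O(k) space), assuming the hyperplane is given as a point p0 and a unit normal n, both plain double arrays; a bounded max-heap keeps the k smallest distances seen so far. The function name and return convention are only illustrative:

import java.util.PriorityQueue;

// Sketch: indices of the k points closest to the hyperplane through p0 with unit normal n.
static int[] kNearestToHyperplane(double[][] points, double[] p0, double[] n, int k) {
    // Max-heap on distance, so the farthest of the current k candidates is on top.
    PriorityQueue<double[]> heap =
        new PriorityQueue<>(k, (a, b) -> Double.compare(b[1], a[1]));
    for (int idx = 0; idx < points.length; idx++) {
        double dot = 0;
        for (int i = 0; i < n.length; i++) dot += (points[idx][i] - p0[i]) * n[i];
        double d = Math.abs(dot);                             // d = |dot(p[i] - p0, n)|
        if (heap.size() < k) heap.add(new double[] { idx, d });
        else if (d < heap.peek()[1]) { heap.poll(); heap.add(new double[] { idx, d }); }
    }
    int[] result = new int[heap.size()];
    for (int i = 0; i < result.length; i++) result[i] = (int) heap.poll()[0];
    return result;                                            // indices, not sorted by distance
}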
Many questions deal with generating normal from depth or depth from normal, but I want to ask about a simple way to generate all the planar surfaces given the depth and normal of an image.
I already have depth and normal of each pixel in the image. For each pixel (ui, vi), assume that we can get its 3D coordinates (xi, yi, zi) with zi as the depth and normal vector (nix, niy, niz). Thus, a unique tangent plane is defined by: nix(x - xi) + niy(y - yi) + niz(z - zi) = 0. Then, for each pixel we can define a unique planar surface by the above equation.
What is a common practice in finding the function f such that f(u, v) = (x, y, z) (from pixel to 3D coordinates)? Is pinhole model (plus the depth data) an effective and accurate one?
How does one generate all the planar surfaces efficiently? One way is to iterate through all the pixels in the image and find all the planes, but this seems like an inefficient method.
If it's a pinhole model,
make sure your 3D data is not distorted by projection.
group your points by normal
this is easy or hard depending on the accuracy of the points/normals. Simply sort the points by normals, which leads to O(n.log(n)) where n is the number of points.
test/group by planes within a single normal group
The idea is to pick 3 points from a group, compute the plane from them, and test which points of the group belong to it. If the count is too low, you picked wrong points (not belonging to the same plane) and need to pick different ones. Also, if the picked points are too close to each other or lie on the same line, you cannot get a correct plane from them.
The math function for plane is:
x*nx + y*ny + z*nz + d = 0
where (nx,ny,nz) is the normal of the group (unit vector) and (x,y,z) is your point position. So you just compute d from a known point (one of the picked ones, (x0,y0,z0))...
d = -x0*nx -y0*ny -z0*nz
and then just test which points satisfy this condition:
threshold=1e-20; // just an accuracy margin
fabs(x*nx + y*ny + z*nz + d) <= threshold
now remove the matched points from the group (move them into the found plane object) and apply this step again on the remaining points until their count is low or no valid plane is found (a small code sketch of this membership test follows at the end of this answer)...
then test another group until no groups are left...
I think RANSAC can speed things up and avoid brute force in this case, but I have never used it myself, so google it...
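As a small illustration of the plane-membership test above (a sketch only; the function name and the threshold value are assumptions, not from any library):

import java.util.ArrayList;
import java.util.List;

// Sketch: collect the points of one normal group that lie on the plane through p0
// with (unit) group normal nrm, within an accuracy margin.
static List<double[]> pointsOnPlane(List<double[]> group, double[] p0, double[] nrm) {
    double threshold = 1e-6;                                   // accuracy margin, tune to your data
    double d = -(p0[0]*nrm[0] + p0[1]*nrm[1] + p0[2]*nrm[2]);  // d = -x0*nx - y0*ny - z0*nz
    List<double[]> onPlane = new ArrayList<>();
    for (double[] p : group) {
        double e = p[0]*nrm[0] + p[1]*nrm[1] + p[2]*nrm[2] + d; // x*nx + y*ny + z*nz + d
        if (Math.abs(e) <= threshold) onPlane.add(p);           // point satisfies the plane equation
    }
    return onPlane;                                             // caller removes these from the group
}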
A possible approach for the planes is to consider the set of normal vectors and perform clustering on them (for instance by k-means). Then every cluster can correspond to several parallel surfaces. By evaluating the distance from the origin (a scalar function), you can form sub-clusters which will separate those surfaces. Finally, points at constant distance can belong to different coplanar patches, which you can separate by connected component labelling.
It is likely that clustering on the normal vectors and distance simultaneously (hence in a 4D space) will yield better results and be simpler. Be sure to normalize the vectors. Another option is to represent the vectors by just two parameters (such as spherical angles), but this will lead to a quite non-uniform mapping, and create phase wrapping issues.
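This is only a rough sketch of the 4D clustering idea (unit normal plus distance to the origin per pixel), using a plain k-means with a fixed cluster count and iteration count, both arbitrary assumptions here; the sub-clustering and connected-component steps are not shown:

import java.util.Random;

// Sketch: cluster per-pixel feature vectors f = (nx, ny, nz, distToOrigin) with k-means.
// Assumes the normal part of each feature vector is already unit length.
static int[] clusterPlanes(double[][] features, int k, int iterations) {
    Random rnd = new Random(42);
    int dim = features[0].length;                     // 4 here
    double[][] centers = new double[k][];
    for (int c = 0; c < k; c++)                       // init centers from random samples
        centers[c] = features[rnd.nextInt(features.length)].clone();

    int[] label = new int[features.length];
    for (int it = 0; it < iterations; it++) {
        // assignment step: nearest center per feature vector
        for (int i = 0; i < features.length; i++) {
            double best = Double.MAX_VALUE;
            for (int c = 0; c < k; c++) {
                double d = 0;
                for (int j = 0; j < dim; j++) {
                    double diff = features[i][j] - centers[c][j];
                    d += diff * diff;
                }
                if (d < best) { best = d; label[i] = c; }
            }
        }
        // update step: recompute centers as cluster means
        double[][] sum = new double[k][dim];
        int[] count = new int[k];
        for (int i = 0; i < features.length; i++) {
            count[label[i]]++;
            for (int j = 0; j < dim; j++) sum[label[i]][j] += features[i][j];
        }
        for (int c = 0; c < k; c++)
            if (count[c] > 0)
                for (int j = 0; j < dim; j++) centers[c][j] = sum[c][j] / count[c];
    }
    return label;                                      // cluster id per pixel
}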
I'm trying to code Ritter's bounding sphere algorithm in arbitrary dimensions, and I'm stuck on the part of creating a sphere which would have 3 given points on its surface, or in other words, a sphere defined by 3 points in N-dimensional space.
That sphere's center would be the minimal-distance equidistant point from the (defining) 3 points.
I know how to solve it in 2-D (circumcenter of a triangle defined by 3 points), and I've seen some vector calculations for 3D, but I don't know what the best method would be for N-D, and if it's even possible.
(I'd also appreciate any other advice about the smallest bounding sphere calculations in ND, in case I'm going in the wrong direction.)
so if I get it right:
The wanted point p is the intersection of 3 hyper-spheres of the same radius r, where the centers of the hyper-spheres are your points p0,p1,p2 and the radius r is the minimum of all possible solutions. In n-D an arbitrary point is defined as (x1,x2,x3,...,xn).
So solve the following equations:
|p-p0|=r
|p-p1|=r
|p-p2|=r
where p and r are the unknowns and p0,p1,p2 are known. This gives 3 equations in n+1 unknowns, so for n > 2 they do not determine p by themselves; the missing constraints come from requiring p to lie in the plane through p0,p1,p2. From the resulting solutions take the one with the minimal nonzero r.
[notes]
To ease up the processing you can have your equations in this form:
(p.x1-p0.x1)^2 + (p.x2-p0.x2)^2 + ... + (p.xn-p0.xn)^2 = r^2
and take sqrt(r^2) only after a solution is found (ignoring the negative radius).
there is also another simpler approach possible:
You can compute the plane in which the points p0,p1,p2 lie, find the u,v coordinates of these points inside this plane, then solve your problem in 2D on the (u,v) coordinates, and after that convert the found solution from (u,v) back to your n-D space.
n = (p1-p0) x (p2-p0); // x is cross product (only defined in 3D)
u = (p1-p0); u /= |u|;
v = u x n; v /= |v|;   // x is cross product; in higher dimensions use Gram-Schmidt instead:
                       //   v = (p2-p0) - dot(p2-p0,u)*u; v /= |v|;
if my memory serves me well, the conversion n-D -> (u,v) is done like this:
P0=(0,0);
P1=(|p1-p0|,0);
P2=(dot(p2-p0,u),dot(p2-p0,v));
where P0,P1,P2 are 2D points in (u,v) coordinate system of the plane corresponding to points p0,p1,p2 in n-D space.
conversion back is done like this:
p = p0 + (P.u*u) + (P.v*v);
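To make this approach concrete, here is a rough Java sketch under the assumptions above; it uses Gram-Schmidt to build v so the plane basis works in any dimension (the cross product above is only defined in 3D), and it assumes the three points are not collinear. All helper names are illustrative:

// Sketch: center of the circle through p0, p1, p2 in n-D, via 2D (u,v) plane coordinates.
static double[] circumcenterND(double[] p0, double[] p1, double[] p2) {
    int n = p0.length;
    double[] u = sub(p1, p0);
    double uLen = norm(u);
    scale(u, 1.0 / uLen);                              // u = (p1 - p0) / |p1 - p0|

    double[] w = sub(p2, p0);
    double proj = dot(w, u);
    double[] v = w.clone();
    for (int i = 0; i < n; i++) v[i] -= proj * u[i];   // v = w - dot(w,u)*u (Gram-Schmidt)
    scale(v, 1.0 / norm(v));

    // 2D coordinates: P0 = (0,0), P1 = (|p1-p0|, 0), P2 = (dot(w,u), dot(w,v))
    double bx = uLen;
    double cx = proj, cy = dot(w, v);                  // cy != 0 if points are not collinear

    // circumcenter of (0,0), (bx,0), (cx,cy) in the (u,v) plane
    double Pu = bx / 2.0;
    double Pv = (cx * cx + cy * cy - bx * cx) / (2.0 * cy);

    double[] center = p0.clone();                      // map back: p = p0 + Pu*u + Pv*v
    for (int i = 0; i < n; i++) center[i] += Pu * u[i] + Pv * v[i];
    return center;
}

static double[] sub(double[] a, double[] b) { double[] r = new double[a.length];
    for (int i = 0; i < a.length; i++) r[i] = a[i] - b[i]; return r; }
static double dot(double[] a, double[] b) { double s = 0;
    for (int i = 0; i < a.length; i++) s += a[i] * b[i]; return s; }
static double norm(double[] a) { return Math.sqrt(dot(a, a)); }
static void scale(double[] a, double s) { for (int i = 0; i < a.length; i++) a[i] *= s; }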
My Bounding Sphere algorithm only calculates a near-optimal sphere, in 3 dimensions.
Fischer has an exact, minimal bounding hyper-sphere (N dimensions). See his paper: http://people.inf.ethz.ch/gaertner/texts/own_work/seb.pdf.
His (C++/Java) code: https://github.com/hbf/miniball.
Jack Ritter
jack#houseofwords.com
There are N points on a 2D grid (x,y). I need to find the shortest path, from point A to point B, but I can only travel from one point to another and I can't travel between two points if the distance between them is farther than a distance D. I thought it might be solved by using some kind of modified Dijkstra's algorithm, but I'm not sure how, because I've never implemented it before, just studied it on Wiki.
Well, Dijkstra finds shortest paths in graphs. So just consider the grid points to be nodes in a graph, with edges between each node S and all other nodes T such that dist(S, T) <= D. You don't have to actually construct the graph because the edges are easily determined as needed by Dijkstra. Just check all nodes in a square around S with radius D. An S-T edge exists iff (Sx - Tx)^2 + (Sy - Ty)^2 <= D^2.
Wiki explanation is sufficient for this.
Dijkstra's algorithm takes 3 inputs. The Graph, Starting node and Ending node.
To construct the graph just do this
for i in 1..n:
    for j in i+1..n:
        if dist(points[i], points[j]) <= D:
            add j to children of i
            add i to children of j
After constructing the graph, perform Dijkstra.
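A rough Java sketch of the whole idea, assuming 2D points as double arrays and Euclidean edge weights; it builds the adjacency list as in the pseudocode above and then runs a standard priority-queue Dijkstra:

import java.util.*;

// Sketch: shortest path over N points where only points closer than D are connected.
static double shortestPath(double[][] pts, double D, int start, int goal) {
    int n = pts.length;
    List<List<Integer>> adj = new ArrayList<>();
    for (int i = 0; i < n; i++) adj.add(new ArrayList<>());
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            if (dist(pts[i], pts[j]) <= D) { adj.get(i).add(j); adj.get(j).add(i); }

    double[] best = new double[n];
    Arrays.fill(best, Double.POSITIVE_INFINITY);
    best[start] = 0;
    PriorityQueue<double[]> pq = new PriorityQueue<>((a, b) -> Double.compare(a[1], b[1]));
    pq.add(new double[] { start, 0 });
    while (!pq.isEmpty()) {
        double[] cur = pq.poll();
        int node = (int) cur[0];
        if (cur[1] > best[node]) continue;            // stale queue entry, skip
        if (node == goal) return best[goal];
        for (int next : adj.get(node)) {
            double cand = best[node] + dist(pts[node], pts[next]);
            if (cand < best[next]) { best[next] = cand; pq.add(new double[] { next, cand }); }
        }
    }
    return Double.POSITIVE_INFINITY;                  // goal not reachable with hops of length <= D
}

static double dist(double[] a, double[] b) {
    double dx = a[0] - b[0], dy = a[1] - b[1];
    return Math.sqrt(dx * dx + dy * dy);
}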
The subtlety of a question like this lies in a critical definition - what is the measure of distance in your grid?
There are many different shortest-path problems and solutions, and they are studied throughout mathematics. They are each characterised by the 'topology' of the area being searched. Consider a few distinct topologies with their own solutions:
A one sided piece of paper
Suppose your grid represents coordinates on a piece of paper - the shortest path is easy to find, as it is simply a straight line between those points.
The surface of the moon
If your grid represents locations on the moon in terms of latitude and longitude, the shortest path is an arc along the moon's surface - If you drove "in a straight line" between two points on the moon, you would be travelling in an arc, because of the moon's curvature.
Road Intersections
If you want to find the distance between two intersections in a grid of roads, where the traffic on each road has a different speed, and you can only travel along the roads, then you can find the shortest path using Dijkstra's algorithm.
One way road intersections
A slight variation of the above - we only need to consider roads in one direction. There might not be any paths in this case.
Summary
To give a good solution, we need to understand the topology of your grid. If the distance is given by Pythagoras's theorem, then that indicates Euclidean geometry (like in the piece of paper example), so the solution is a straight line.
Is it possible you mean that you can travel between any two points if they are closer than D - like flying a plane between airports, for example?
EDIT: I didn't see your comment because you didn't use @. In your case your grid is like the airports a plane can fly between. The shortest path is found using Dijkstra's algorithm - the immediate neighbours of a point are all points closer than D. Find them, represent it all as a graph, and use Dijkstra's algorithm.
I would suggest using the formula for the distance between 2 points, i.e. sqrt((x2-x1)^2+(y2-y1)^2). This straight-line distance is always the shortest between 2 points.
I'm using a static KD-Tree for nearest neighbor search in 3D space. However, the client's specifications have now changed so that I'll need a weighted nearest neighbor search instead. For example, in 1D space, I have a point A with weight 5 at 0, and a point B with weight 2 at 4; the search should return A if the query point is from -5 to 5, and should return B if the query point is from 5 to 6. In other words, the higher-weighted point takes precedence within its radius.
Google hasn't been any help - all I get is information on the K-nearest neighbors algorithm.
I can simply remove points that are completely subsumed by a higher-weighted point, but this generally isn't the case (usually a lower-weighted point is only partially subsumed, as in the 1D example above). I could use a range tree to query all points in an NxNxN cube centered on the query point and pick the one with the greatest weight, but the naive implementation of this is wasteful: I'd have to set N to the maximum weight in the entire tree, even though there may not be a point with that weight anywhere near the cube. For example, if the maximum weight in the tree is 25, I'd need to set N to 25 even though the highest-weighted point near any given query probably has a much lower weight; in the 1D case, if I have a point located at 100 with weight 25, my naive algorithm would still use N = 25 even when the query is far outside that point's radius.
To sum up, I'm looking for a way that I can query the KD tree (or some alternative/variant) such that I can quickly determine the highest-weighted point whose radius covers the query point.
FWIW, I'm coding this in Java.
It would also be nice if I could dynamically change a point's weight without incurring too high of a cost - at present this isn't a requirement, but I'm expecting that it may be a requirement down the road.
Edit: I found a paper on a priority range tree, but this doesn't exactly address the same problem in that it doesn't account for higher-priority points having a greater radius.
Use an extra dimension for the weight. A point (x,y,z) with weight w is placed at (N-w,x,y,z), where N is the maximum weight.
Distances in 4D are defined by
d4((a, b, c, d), (e, f, g, h)) = |a - e| + d3((b, c, d), (f, g, h))
where d3 is whatever your 3D distance was.
To find all potential results for (x,y,z), query a ball of radius N about (0,x,y,z).
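As a small illustration (a sketch only, assuming the 3D part of the distance is plain Euclidean; the helper names are made up):

// Sketch: lift a weighted 3D point to 4D as (N - w, x, y, z), N = maximum weight.
static double[] lift(double[] xyz, double w, double maxWeight) {
    return new double[] { maxWeight - w, xyz[0], xyz[1], xyz[2] };
}

// 4D distance: |a0 - b0| + d3(rest), here with Euclidean distance as the 3D part.
// A lifted point is then within radius N of the query (0, x, y, z) exactly when
// its own radius (weight) covers the query point.
static double dist4D(double[] a, double[] b) {
    double d3 = Math.sqrt((a[1]-b[1])*(a[1]-b[1]) + (a[2]-b[2])*(a[2]-b[2]) + (a[3]-b[3])*(a[3]-b[3]));
    return Math.abs(a[0] - b[0]) + d3;
}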
I think I've found a solution: the nested interval tree, which is an implementation of a 3D interval tree. Rather than storing points with an associated radius that I then need to query, I instead store and query the radii directly. This has the added benefit that each dimension does not need to have the same weight (so the query region is a rectangular box instead of a cube), which is not presently a project requirement but may become one in the future (the client only recently added the "weighted points" requirement, who knows what else he'll come up with).