ARCore NDK API questions - android-ndk

From looking at https://developers.google.com/ar/reference/c, some things are not clear to me.
The method ArPointCloud_getData, documented at https://developers.google.com/ar/reference/c/group/ar-point-cloud#arpointcloud_getdata, says that
"Each point is represented by four consecutive values in the array; first the X, Y, Z position coordinates, followed by a confidence value."
Question: What about the unique point IDs (identifiers)? Are they not included in the output of this method?
Question: Is there a convenience method that takes the four float values and constructs a new ArPoint object, including the pose of the point?
The method ArPointCloud_getPointIds, documented at https://developers.google.com/ar/reference/c/group/ar-point-cloud#arpointcloud_getpointids, says that
"Each point has a unique identifier (within a session) that is persistent across frames. That is, if a point from Point Cloud 1 has the same id as the point from Point Cloud 2, then it represents the same point in space."
Question: How can a unique identifier (point id) be used to get the X, Y, Z and confidence values for a single unique point?
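For context, here is a conceptual sketch of the lookup I am after, written in Python rather than NDK C, and assuming (my assumption, not something the quoted docs state) that the id buffer from ArPointCloud_getPointIds is parallel to the float buffer from ArPointCloud_getData, i.e. ids[i] belongs to the i-th group of four floats:

import numpy as np

num_points = 3                                     # hypothetical value from ArPointCloud_getNumberOfPoints
data = np.zeros(num_points * 4, dtype=np.float32)  # placeholder for the x, y, z, confidence buffer
ids = np.array([101, 102, 103], dtype=np.int32)    # placeholder for the point-id buffer

# Build an id -> (x, y, z, confidence) lookup on the application side.
points_by_id = {int(ids[i]): tuple(data[4 * i:4 * i + 4]) for i in range(num_points)}
x, y, z, confidence = points_by_id[102]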

Related

How to apply a Pearson Correlation Analysis over all pairs of pixels of a DataArray as a Correlation Matrix?

I am facing serious difficulties in generating a correlation matrix (pixel by pixel) from a single NetCDF file with dimensions ('lon', 'lat', 'time'). My final intent is to generate what is called a Teleconnectivity Map.
This map is composed of correlation coefficients. Each pixel has a value that represents the highest correlation value (in absolute terms) found in the correlation matrix over all pairs of pixels in the DataArray.
Therefore, in order to create my Teleconnectivity Map, instead of looping over every longitude ('lon') and every latitude ('lat') and later checking every possible pair for the correlation with the highest magnitude, I was thinking of applying xr.apply_ufunc with a wrapped correlation function inside.
Despite my efforts, I still don't get what is truly happening behind the scenes in xr.apply_ufunc. All I managed to get was a single resultant matrix with all pixels equal to 1 (perfect correlation).
See code below:
import numpy as np
import xarray as xr
def correlation(x, y):
    return np.corrcoef(x, y)[0, 0]  # to return a single correlation index, instead of a matrix

def wrapped_correlation(da, x, coord='time'):
    """Finds the correlation along a given dimension of a dataarray."""
    from functools import partial
    fpartial = partial(correlation, x.values)
    return xr.apply_ufunc(fpartial,
                          da,
                          input_core_dims=[[coord]],
                          output_core_dims=[[]],
                          vectorize=True,
                          output_dtypes=[float]
                          )
# testing the wrapped correlation on a sample dataset:
ds = xr.tutorial.open_dataset('air_temperature').load()

# testing for a single point in space:
x = ds['air'].sel(dict(lon=1, lat=92), method='nearest')

# over all points in the DataArray:
Corr_over_x = wrapped_correlation(ds['air'], x)

# notice that the resultant DataArray is composed solely of ones (a perfect
# correlation match). This is impossible; I would expect a different
# correlation value for each pixel here.
Corr_over_x

# if one were to plot the data, it should show a variety of correlation values:
Corr_over_x.plot()
This is an important asset for meteorologists and remote sensing researchers. It allows the evaluation of potential geophysical patterns over a given area of study.
I thank you for your time, and I hope to hear from you soon.
Sincerely yours,
Firstly, you need to use np.corrcoef(x, y)[0,1]. In the end, you don't need to use partial at all; see below:
def correlation(x1, x2):
    return np.corrcoef(x1, x2)[0, 1]  # to return a single correlation index, instead of a matrix

def wrapped_correlation(da, x, coord='time'):
    """Finds the correlation along a given dimension of a dataarray."""
    return xr.apply_ufunc(correlation,
                          da,
                          x,
                          input_core_dims=[[coord], [coord]],
                          output_core_dims=[[]],
                          vectorize=True,
                          output_dtypes=[float]
                          )
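As a quick sanity check, the corrected function can be run against the same tutorial dataset used in the question; this simply repeats the question's own test with the fixed code:

import numpy as np
import xarray as xr

ds = xr.tutorial.open_dataset('air_temperature').load()

# reference pixel, as in the question
x = ds['air'].sel(dict(lon=1, lat=92), method='nearest')

# correlation of every pixel's time series against the reference pixel
Corr_over_x = wrapped_correlation(ds['air'], x)

# the resulting 2D map should now vary over (-1, 1) instead of being all ones
Corr_over_x.plot()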
I have managed to solve my question. The script has become a bit long; nevertheless, it does what was originally intended.
The code is adapted from this reference.
Since it is too long to show as a snippet here, I am posting a link to my GitHub account, where the algorithm (organized in a package named Teleconnection_using_xarray_data) can be checked here.
The package has two modules with similar results.
The first module (teleconnection_with_connecting_pathways) is slower than the second (teleconnection_via_numpy), but it allows one to evaluate the connecting pathways between the partial teleconnection maps.
The second only returns the resultant teleconnection map, without the connecting lines (geopandas LineStrings), though it is much faster.
Feel free to collaborate. If possible, I would like to combine both modules, ensuring both speed and pathway analysis in the teleconnection algorithm.
Sincerely yours,
Philipe Leal

KDTree with periodic boundary conditions and pair distances in output

I want to run a nearest-neighbour search over >10k points that lie within a periodic box, and have it return the distances of these points together with their indices.
So far I have tried sklearn.neighbors.KDTree(positions).query_radius(positions, r=maximum_distance, return_distance=True), which returns the nearest-neighbour distances within a maximum radius; however, it does not work with periodic boundary conditions (PBC). Another method I have explored is scipy.spatial.cKDTree(positions, boxsize=box_size).query_pairs(r=maximum_distance), which works with PBC but does not return the distances between pairs.
Would it be possible to extend sklearn.neighbors.KDTree with the capability to handle PBC, as scipy.spatial.cKDTree does?
Or
Would it be possible to extend scipy.spatial.cKDTree with the capability to return pair distances?
The answer is:
scipy.spatial.cKDTree().query()
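A minimal sketch of that suggestion, assuming a cubic box of side box_size: building the tree with boxsize makes all distances periodic, and query() (unlike query_pairs()) returns distances and indices together.

import numpy as np
from scipy.spatial import cKDTree

box_size = 10.0
rng = np.random.default_rng(0)
positions = rng.uniform(0.0, box_size, size=(10_000, 3))

# boxsize switches the tree to periodic (toroidal) distances
tree = cKDTree(positions, boxsize=box_size)

# k=2 because each point's closest hit is itself at distance 0
distances, indices = tree.query(positions, k=2)
nn_distance = distances[:, 1]   # periodic distance to the nearest neighbour
nn_index = indices[:, 1]        # index of that neighbour

If a fixed search radius is needed instead of the k nearest neighbours, query_ball_point() on the same periodic tree returns the neighbour indices, and the periodic distances can then be computed for just those pairs.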

How to get the longest and the shortest edge of a room boundary?

In the Revit API, I'm trying to get the longest and the shortest edge of a room boundary (the room is a rectangle).
For now, I have a list of the 4 bounding edges of the room (rb_curves). These are Curves. I'm trying to sort this list by the length of each curve.
sorted_rb_curves = sorted(rb_curves, key=?)
I'm wondering what I can assign to the 'key' in order to sort.
Your help would be much appreciated!
An easy way to sort lists based on object attributes is to use a lambda as the sort key. In your case it would be:
rb_curves.sort(key=lambda x: x.Length)
where Length is the attribute you are sorting by. Note that this modifies your original list in place (as opposed to creating a new sorted list).
This would mean rb_curves[0] is the shortest Boundary, rb_curves[-1] is the longest.
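If you would rather not mutate the original list, sorted() with the same key returns a new list. A small sketch, assuming rb_curves holds Revit Curve objects exposing a Length property:

# sorted() returns a new list and leaves rb_curves untouched
sorted_rb_curves = sorted(rb_curves, key=lambda c: c.Length)

shortest_edge = sorted_rb_curves[0]    # smallest Length
longest_edge = sorted_rb_curves[-1]    # largest Length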

C/C++ Delaunay lightweight library that preserves input order

Unfortunately, I cannot find a C++ (or C or C#) library for performing Delaunay triangulations on a set of points (2D or 2.5D) that is able to deliver the output in an input-aware manner.
That is, given a set of points P_1, P_2, ..., P_N, the output should consist of a set of triplets (a triangle soup) (i_a, i_b, i_c), where i_a, i_b and i_c are the indices of the P_i points (hence numbers between 1 and N). I've tried Fade2D, but I found it very wasteful in terms of how it handles input (one has to pack vertices into its own point2d structure), and the output disregards whatever indexing the input had, delivering a set of coordinates together with another ordering of these vertices.
I'm the author of Fade2D, and this is a late answer; I was not aware of your question. You do not need to pack your coordinates into the Point2 class before you insert them. There is also an insert method that takes an array of coordinates:
void Fade2D::insert(int numPoints,double * aCoordinates,Point2 ** aHandles);
This method takes an array of coordinates (x0,y0,x1,y1,...,xn,yn) and returns a vector of Point2* pointers that has exactly the same order. That's virtually no overhead. For your convenience, you can use
Point2::setCustomIndex() and
Point2::getCustomIndex()
to store and retrieve your own indices.
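Not a C++ example, but purely to illustrate the input-indexed output format the question asks for (each triangle as a triplet of indices into the original point array), here is how the same idea looks with scipy.spatial.Delaunay in Python:

import numpy as np
from scipy.spatial import Delaunay

points = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
tri = Delaunay(points)

# Each row of `simplices` is a triplet (i_a, i_b, i_c) of 0-based indices
# into `points`, so the triangulation is expressed in the input ordering.
print(tri.simplices)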

KD Tree alternative/variant for weighted data

I'm using a static KD-Tree for nearest neighbor search in 3D space. However, the client's specifications have now changed so that I'll need a weighted nearest neighbor search instead. For example, in 1D space, I have a point A with weight 5 at 0, and a point B with weight 2 at 4; the search should return A if the query point is from -5 to 5, and should return B if the query point is from 5 to 6. In other words, the higher-weighted point takes precedence within its radius.
Google hasn't been any help - all I get is information on the K-nearest neighbors algorithm.
I could simply remove points that are completely subsumed by a higher-weighted point, but that generally isn't the case (usually a lower-weighted point is only partially subsumed, as in the 1D example above). I could also use a range tree to query all points in an NxNxN cube centered on the query point and pick the one with the greatest weight, but a naive implementation of this is wasteful: I would need to set N to the maximum weight in the entire tree, even though there may not be a point with that weight anywhere near the cube. For example, if the maximum weight in the tree is 25, I would need to set N to 25 even though the highest-weighted point for any given cube probably has a much lower weight; in the 1D case, if I have a point located at 100 with weight 25, my naive algorithm would still set N to 25 even when the query point is far outside that point's radius.
To sum up, I'm looking for a way that I can query the KD tree (or some alternative/variant) such that I can quickly determine the highest-weighted point whose radius covers the query point.
FWIW, I'm coding this in Java.
It would also be nice if I could dynamically change a point's weight without incurring too high of a cost - at present this isn't a requirement, but I'm expecting that it may be a requirement down the road.
Edit: I found a paper on a priority range tree, but this doesn't exactly address the same problem in that it doesn't account for higher-priority points having a greater radius.
Use an extra dimension for the weight. A point (x,y,z) with weight w is placed at (N-w,x,y,z), where N is the maximum weight.
Distances in 4D are defined by…
d((a, b, c, d), (e, f, g, h)) = |a - e| + d((b, c, d), (f, g, h))
…where the second d is whatever your 3D distance was.
To find all potential results for (x,y,z), query a ball of radius N about (0,x,y,z).
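A small sketch of this scheme (in Python, just for illustration), under the added assumption that the 3D distance is also Manhattan, so the full 4D metric is simply the p=1 metric and scipy's cKDTree can run the ball query directly; the points and weights are taken from the 1D-style example in the question:

import numpy as np
from scipy.spatial import cKDTree

# Example data: A (weight 5) at the origin, B (weight 2) at x = 4.
points = np.array([[0.0, 0.0, 0.0],
                   [4.0, 0.0, 0.0]])
weights = np.array([5.0, 2.0])
N = weights.max()                        # maximum weight in the tree

# Embed each point at (N - w, x, y, z), as described above.
embedded = np.column_stack([N - weights, points])
tree = cKDTree(embedded)

def covering_points(query_xyz):
    """Indices of points whose weight-radius covers query_xyz (p=1 metric)."""
    return tree.query_ball_point(np.concatenate(([0.0], query_xyz)), r=N, p=1)

# A covers x = 1 (|1 - 0| <= 5); B does not (|1 - 4| > 2).
print(covering_points(np.array([1.0, 0.0, 0.0])))   # -> [0]

Among the returned candidates, the highest-weighted one is the final answer.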
I think I've found a solution: the nested interval tree, which is an implementation of a 3D interval tree. Rather than storing points with an associated radius that I then need to query, I instead store and query the radii directly. This has the added benefit that each dimension does not need to have the same weight (so that the radius is a rectangular box instead of a cubic box), which is not presently a project requirement but may become one in the future (the client only recently added the "weighted points" requirement, who knows what else he'll come up with).
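For what it's worth, here is a brute-force sketch of that idea (storing the coverage boxes and querying them directly); a real implementation would index the boxes with the interval tree rather than scanning them, and the boxes below are made up purely for illustration:

import numpy as np

# Each entry: (lower corner, upper corner, weight) of a point's coverage box.
boxes = [
    (np.array([-5.0, -5.0, -5.0]), np.array([5.0, 5.0, 5.0]), 5.0),   # e.g. point A, weight 5
    (np.array([2.0, -2.0, -2.0]),  np.array([6.0, 2.0, 2.0]),  2.0),  # e.g. point B, weight 2
]

def best_covering_weight(q):
    """Highest weight among the boxes that contain the query point q."""
    hits = [w for lo, hi, w in boxes if np.all(lo <= q) and np.all(q <= hi)]
    return max(hits) if hits else None

print(best_covering_weight(np.array([1.0, 0.0, 0.0])))   # -> 5.0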
