I am looking for a data structure to store irregular elevation data {xi, yi, zi} that allows fast look-up of the points within an xy range.
From what I gather, a kd-tree should be suitable for this, and also fairly simple to implement?
However, the number of points in the elevation dataset may be enormous, so it may not be possible to process all points in one go. Instead I aim to divide the xy region into tiles and process each tile separately.
The points within the green rectangle are those needed for tile 1. When I move on to tile 2, I will need the points within a green rectangle centered around tile 2. The two rightmost points in the green rectangle around tile 1 will still be needed; the other points could be swapped out of memory if needed. In addition, four more points will be needed for tile 2.
A kd-tree may therefore not be optimal, since it would require me to rebuild the complete tree for each new tile? Would an R-tree be a better choice?
The points themselves should be stored on disk in some clever format and read into memory just before they are needed. Before I start processing tile 1, I could tell the data structure maintaining the points that I will need tile 2 next, and it could then begin reading the necessary points from disk in a separate thread.
I was considering using smaller tiles for loading points into the data structure. For instance, the points in the figure could be divided into 16x16 loading tiles.
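If those loading tiles form a regular grid, mapping a point to its tile is just integer arithmetic. A small C++ sketch of that mapping; the Grid struct and its field names are hypothetical, for illustration only:

#include <cmath>
#include <cstddef>

// Hypothetical descriptor for a regular grid of loading tiles.
struct Grid {
    double x0, y0;       // lower-left corner of the covered region
    double tileW, tileH; // size of one loading tile
    int nx, ny;          // tiles per axis, e.g. 16 x 16

    // Linear index of the loading tile containing (x, y), clamped to the grid.
    std::size_t tileIndex(double x, double y) const {
        int ix = (int)std::floor((x - x0) / tileW);
        int iy = (int)std::floor((y - y0) / tileH);
        ix = ix < 0 ? 0 : (ix >= nx ? nx - 1 : ix);
        iy = iy < 0 ? 0 : (iy >= ny ? ny - 1 : iy);
        return (std::size_t)iy * nx + ix;
    }
};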
Are there any libraries in C/C++ that implement this functionality?
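On the library question: Boost.Geometry ships an R-tree (boost::geometry::index::rtree) that supports bulk loading and box range queries. A minimal sketch of the kind of look-up described above; the sample points and tile rectangle are made up:

#include <boost/geometry.hpp>
#include <boost/geometry/index/rtree.hpp>
#include <iostream>
#include <utility>
#include <vector>

namespace bg = boost::geometry;
namespace bgi = boost::geometry::index;

using Point = bg::model::point<double, 2, bg::cs::cartesian>;
using Box = bg::model::box<Point>;
using Value = std::pair<Point, double>; // xy position, z elevation

int main() {
    // Bulk-load some sample points (the packing constructor builds a compact tree).
    std::vector<Value> pts = {{Point(1.0, 2.0), 10.5},
                              {Point(3.0, 1.0), 11.0},
                              {Point(2.5, 4.0), 9.8}};
    bgi::rtree<Value, bgi::rstar<16>> tree(pts.begin(), pts.end());

    // All points whose xy falls inside the green rectangle of the current tile.
    Box tileRect(Point(0.0, 0.0), Point(3.0, 3.0));
    std::vector<Value> hits;
    tree.query(bgi::within(tileRect), std::back_inserter(hits));

    for (const Value& v : hits)
        std::cout << bg::get<0>(v.first) << " " << bg::get<1>(v.first)
                  << " z=" << v.second << "\n";
}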
Two questions about Geometry Nodes.
To retopologize a certain mesh, I'd like to subdivide a particular edge whose two neighbouring faces are both triangles.
Questions (screenshot: https://i.stack.imgur.com/kG3Us.png):
1. How can I find the edge with Geometry Nodes?
2. Is there a general strategy for finding specific elements (the index of a vertex, edge, or face)?
History:
I want to retopologize the shape (call it A) that comes out of the Convex Hull node, because it is messy.
To do so, I chose the approach of shrinking a simple shape onto A.
The node order is Bounding Box > Subdivide > Set Position, but large flat areas still remain.
To fit the shape more precisely, I am trying to subdivide additionally, only in those areas, and then apply a final Set Position node to fit the result to the original messy A.
After trying some ideas (below), I am now trying to extrude the faces, scale the top selection to zero, merge them, and then set these new vertices onto the messy A.
But I find that the edges between the faces remain. 🤣
That is my question above: how can I fit these edges onto A?
Ideas I have tried:
Separate the large areas > Subdivide > Join Geometry > Set Position: this makes holes.
Separate the large areas > Subdivide > Convex Hull > Mesh Boolean: this makes messy topology.
Avoiding subdivision of the large areas altogether, e.g. scaling the bounding box up enough that the large areas disappear, overstretches the rest of the mesh, which looks even harder to fix, so I would rather solve the large flat areas if I can.
(https://i.stack.imgur.com/QzHKa.jpg)
I want to do retopology: I want to fit a new shape that has clean topology onto the original messy shape.
I am working on a project where I have a model that does instance segmentation to segment nuclei in an image. The next step is to label these segmented nuclei. I am scaling up the labeling by processing images as tiles.
The issue I am facing now is how to handle incorrect labeling: when an object gets split by the tiling, its parts are labelled differently.
tile_size = 2048
# vec_arr has shape (channels, height, width); tile the last two axes
for x in range(0, vec_arr.shape[2], tile_size):
    x_max = min(vec_arr.shape[2], x + tile_size)
    for y in range(0, vec_arr.shape[1], tile_size):
        y_max = min(vec_arr.shape[1], y + tile_size)
        tile = vec_arr[:, y:y_max, x:x_max]  # label this tile
The above code shows how I am tiling an image. I am using this repo (https://github.com/MouseLand/cellpose/blob/master/cellpose/dynamics.py#L574) as the basis for labeling images, since I am using their network. I am looking for ideas on how to identify objects that are connected across tiles and fill them with the same values.
Currently I maintain a counter of the number of objects labelled so far and start labeling each new tile from that value.
I am interested in how I can identify the same object across tiles.
This is not easy.
First of all you need an overlap in your tiling. Each tile should overlap the surrounding ones by some amount, which you then cut off when recomposing the larger image. The overlap amount should be at least the size of a nucleus, but preferably larger. The extra space is meant to guarantee that a nucleus that straddles the tile edge is detected identically in the two tiles where you can see it.
Next, when cutting off the overlap region and recomposing the larger image, a nucleus that straddles the tile edge (i.e. is partially in the overlap region) must be either preserved entirely or removed completely, depending on which tile it “belongs to”. There are different ways to define this. For example, you can compute the centroid of the nucleus, determine which tile that falls in, and remove the nucleus from the other tile.
Thus, each nucleus is detected in exactly one tile. However, if the overlap region is not large enough, then a detection for a nucleus might not have the same shape in the two overlapping tiles, leading to two different centroids for the same nucleus. In this case, the nucleus could be perceived as not part of either tile, or part of both tiles. It is important to understand the detection algorithm, so that you can find the right overlap size that will guarantee identical detection for the two tiles.
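The ownership rule itself is language-agnostic; here is a short C++ sketch of it. The flat int32 label image, its width/height, and the uniform overlap margin are assumptions for illustration, not anything from cellpose:

#include <cstdint>
#include <map>
#include <vector>

// One accumulator per label: sum of x, sum of y, pixel count.
struct Centroid { double sx = 0, sy = 0; std::size_t n = 0; };

// 'labels' is a labeled tile of size w x h that includes an 'overlap'
// margin on every side. A nucleus is kept only if its centroid lies in
// the core region; otherwise it belongs to a neighbouring tile.
std::vector<int32_t> keep_owned(std::vector<int32_t> labels, int w, int h, int overlap) {
    std::map<int32_t, Centroid> c;
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            if (int32_t id = labels[y * w + x]) {
                c[id].sx += x; c[id].sy += y; ++c[id].n;
            }
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            if (int32_t id = labels[y * w + x]) {
                double cx = c[id].sx / c[id].n, cy = c[id].sy / c[id].n;
                bool owned = cx >= overlap && cx < w - overlap &&
                             cy >= overlap && cy < h - overlap;
                if (!owned) labels[y * w + x] = 0; // drop: owned by a neighbour
            }
    return labels;
}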
I'm looking for an efficient way to display lots of spheres using DirectX 11. The spheres are defined by (x, y, z, r), where (x, y, z) are coordinates in space and r is the radius. I want to display only the spheres that can be seen: spheres outside the field of view, and spheres too small to be seen, wouldn't be drawn. However, if a group of spheres that are each smaller than one pixel together covers at least one pixel, then I want to display the predominant color. Spheres have a single color each and different levels of transparency. Any help would be appreciated, and incomplete answers are acceptable.
You need several things. First, an indexed unit-sphere geometry; second, a buffer to store the per-sphere instance properties (position, radius and color); and third, a small buffer for the indirect draw arguments described below. The three combine in a single ID3D11DeviceContext::DrawIndexedInstancedIndirect call.
The remaining question is how to feed the instance buffer. The CPU version is easy: apply frustum culling, sort back to front because of the transparency, apply a merge based on the screen projection, update the buffer, and use ID3D11DeviceContext::DrawIndexedInstanced.
The GPU version does the same thing with compute shaders but is harder to implement. The advantage: zero CPU/GPU synchronization, and it should support far more instances.
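To make the CPU path concrete, here is a rough C++ sketch under stated assumptions: SphereInstance is a made-up instance layout, insideFrustum stands in for your own frustum test, and instanceBuffer is a dynamic buffer large enough for all visible spheres (the sub-pixel merge step is omitted):

#include <d3d11.h>
#include <DirectXMath.h>
#include <algorithm>
#include <cstring>
#include <vector>

// Per-instance data matching the question: position, radius, RGBA color.
struct SphereInstance {
    DirectX::XMFLOAT3 position;
    float radius;
    DirectX::XMFLOAT4 color; // alpha carries the transparency
};

void DrawSpheres(ID3D11DeviceContext* ctx, ID3D11Buffer* instanceBuffer,
                 UINT sphereIndexCount, const std::vector<SphereInstance>& all,
                 const DirectX::XMFLOAT3& eye,
                 bool (*insideFrustum)(const SphereInstance&)) {
    // 1. Frustum culling on the CPU.
    std::vector<SphereInstance> visible;
    for (const auto& s : all)
        if (insideFrustum(s)) visible.push_back(s);

    // 2. Back-to-front ordering so alpha blending composes correctly.
    auto dist2 = [&](const SphereInstance& s) {
        float dx = s.position.x - eye.x, dy = s.position.y - eye.y,
              dz = s.position.z - eye.z;
        return dx * dx + dy * dy + dz * dz;
    };
    std::sort(visible.begin(), visible.end(),
              [&](const SphereInstance& a, const SphereInstance& b) {
                  return dist2(a) > dist2(b);
              });

    // 3. Upload the instance data (dynamic buffer) and issue one draw call.
    D3D11_MAPPED_SUBRESOURCE mapped;
    if (SUCCEEDED(ctx->Map(instanceBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped))) {
        std::memcpy(mapped.pData, visible.data(),
                    visible.size() * sizeof(SphereInstance));
        ctx->Unmap(instanceBuffer, 0);
        ctx->DrawIndexedInstanced(sphereIndexCount, (UINT)visible.size(), 0, 0, 0);
    }
}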
Say I have a 2D plane covered with polygons (each identified as an array of vertices), analogous to:
Let's say I also have a point with coordinates on this plane. What is the easiest method to determine which of the polygons the point lies in?
Although this example shows only 4 polygons, so it would be simple to check each polygon to see whether the point is within it, I am building a system that presently has about 150 polygons and could grow to thousands, so doing it that way could become very slow.
So, are there any solutions that avoid iterating through all available polygons and checking whether the point is inside each one?
You can use a kd-tree or an R-tree to reduce the search space. You can also look at a quadtree: you can choose the quad size to fit the polygons and to minimize overlapping bounding boxes.
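As one concrete (illustrative) C++ sketch, using the R-tree from Boost.Geometry: index each polygon's bounding box, then run the exact point-in-polygon test only on the handful of candidates the tree returns:

#include <boost/geometry.hpp>
#include <boost/geometry/index/rtree.hpp>
#include <utility>
#include <vector>

namespace bg = boost::geometry;
namespace bgi = boost::geometry::index;

using Point = bg::model::point<double, 2, bg::cs::cartesian>;
using Polygon = bg::model::polygon<Point>;
using Box = bg::model::box<Point>;
using Value = std::pair<Box, std::size_t>; // bounding box + polygon index

// Build an R-tree over the polygons' bounding boxes.
bgi::rtree<Value, bgi::quadratic<16>> build_index(const std::vector<Polygon>& polys) {
    std::vector<Value> values;
    for (std::size_t i = 0; i < polys.size(); ++i)
        values.emplace_back(bg::return_envelope<Box>(polys[i]), i);
    return bgi::rtree<Value, bgi::quadratic<16>>(values.begin(), values.end());
}

// Returns the index of the polygon containing p, or -1 if none does.
// Only candidates whose boxes contain p get the exact test.
long find_polygon(const bgi::rtree<Value, bgi::quadratic<16>>& tree,
                  const std::vector<Polygon>& polys, const Point& p) {
    for (auto it = tree.qbegin(bgi::intersects(p)); it != tree.qend(); ++it)
        if (bg::within(p, polys[it->second]))
            return static_cast<long>(it->second);
    return -1;
}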
I have a 3D volume given by a binary space partition (BSP) tree. Usually these are built from polygon models, and the split polygons are already stored inside the tree nodes.
But mine is not, so I have no polygons. Every node holds nothing but its cut plane (given, for example, by a normal and a distance from the origin). The tree therefore still represents a solid 3D volume, defined by all the cuts made. However, for visualisation I need a polygonal mesh of this volume. How can that be reconstructed efficiently?
The crude method would be to convert the infinite half-spaces of the leaves into large enough polyhedra (e.g. cubes) and push every single one of them up the tree, cutting it by every node plane it passes. That seems extremely costly, as the tree may be unbalanced (e.g. if naively built from a convex polyhedron). Is there any classic solution?
In order to recover the polygonal surface you need to intersect the planes: each vertex of a polygon is generated by the intersection of three planes, and each edge by the intersection of two planes. But making this efficient and numerically stable is no trivial task, so I propose using qhalf, which is part of qhull; the qhull documentation describes qhalf's input and output. Of course you can also use qhull (and the functionality from qhalf) as a library.
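If you do roll your own, the core vertex computation is a 3x3 linear solve: a mesh vertex is the point satisfying n·p = d for three planes at once. A minimal C++ sketch using Cramer's rule; the Plane struct mirrors the normal-plus-distance representation from the question:

#include <array>
#include <cmath>
#include <optional>

// A plane n.p = d, as stored in the BSP nodes (normal + origin distance).
struct Plane { double nx, ny, nz, d; };

// Intersection point of three planes, or nullopt if they are (nearly)
// parallel. Solves the 3x3 system via Cramer's rule.
std::optional<std::array<double, 3>>
intersect(const Plane& a, const Plane& b, const Plane& c) {
    auto det3 = [](double a1, double a2, double a3,
                   double b1, double b2, double b3,
                   double c1, double c2, double c3) {
        return a1 * (b2 * c3 - b3 * c2)
             - a2 * (b1 * c3 - b3 * c1)
             + a3 * (b1 * c2 - b2 * c1);
    };
    double D = det3(a.nx, a.ny, a.nz, b.nx, b.ny, b.nz, c.nx, c.ny, c.nz);
    if (std::fabs(D) < 1e-12) return std::nullopt; // degenerate configuration
    double x = det3(a.d, a.ny, a.nz, b.d, b.ny, b.nz, c.d, c.ny, c.nz) / D;
    double y = det3(a.nx, a.d, a.nz, b.nx, b.d, b.nz, c.nx, c.d, c.nz) / D;
    double z = det3(a.nx, a.ny, a.d, b.nx, b.ny, b.d, c.nx, c.ny, c.d) / D;
    return std::array<double, 3>{x, y, z};
}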