I need to create a 3000x3000 bitmap for the coordinates of my robots. In theory I have an MxM array, M=3000, and if my robot sees something at, for example, coordinates [5][5], I put a 1 there; if it sees nothing, a 0.
When I tried to create int[][] b = new int[3000][3000],
I got an OutOfMemoryError.
I tried using RMS, but while I could create 3000 rows, I could fit only 50 columns.
I am thinking of using a text file, but I would need in-place updates, and working with text files is very hard in J2ME.
Thanks for any reply!
Some approaches:
1) Store your coordinates in a file, and load and update in memory only those rows/columns of data that surround the robot (maybe a 10x10 matrix). Buffering. A sketch of this idea follows below.
2) Use a quadtree to store your coordinates. You may have to use the external-file approach here too, but maybe you can think of something better.
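For scale: int[3000][3000] is 9 million 4-byte ints, about 36 MB, while one bit per cell needs only ~1.1 MB total, i.e. 375 bytes per row. Below is a minimal, untested sketch of approach 1 that packs each row into one 375-byte RMS record and caches the current row in memory; a real version might cache a band of rows around the robot, and the store name is made up.

import javax.microedition.rms.RecordStore;
import javax.microedition.rms.RecordStoreException;

// Bit-packed 3000x3000 grid: one RMS record per row, one bit per cell.
public class RowBuffer {
    private static final int M = 3000;
    private static final int ROW_BYTES = (M + 7) / 8; // 375 bytes per row

    private RecordStore store;
    private byte[] row;        // the currently cached row
    private int rowIndex = -1; // which row is cached (-1 = none)

    public RowBuffer() throws RecordStoreException {
        store = RecordStore.openRecordStore("grid", true);
        if (store.getNumRecords() == 0) {
            // First run: create M empty rows; record ids will be 1..M.
            byte[] empty = new byte[ROW_BYTES];
            for (int i = 0; i < M; i++) {
                store.addRecord(empty, 0, ROW_BYTES);
            }
        }
    }

    private void loadRow(int y) throws RecordStoreException {
        if (y == rowIndex) return;
        flush();                      // write the old row back first
        row = store.getRecord(y + 1); // RMS record ids start at 1
        rowIndex = y;
    }

    // Call before the MIDlet exits so the cached row is not lost.
    public void flush() throws RecordStoreException {
        if (rowIndex >= 0) store.setRecord(rowIndex + 1, row, 0, ROW_BYTES);
    }

    public void set(int x, int y, boolean seen) throws RecordStoreException {
        loadRow(y);
        if (seen) row[x >> 3] |= (1 << (x & 7));
        else      row[x >> 3] &= ~(1 << (x & 7));
    }

    public boolean get(int x, int y) throws RecordStoreException {
        loadRow(y);
        return (row[x >> 3] & (1 << (x & 7))) != 0;
    }
}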
I am working on a project that requires fast proximity queries on a database of location data.
In my database I want to store locations with additional information. The idea is that the user opens a map at a certain location and my program fetches only the markers visible to that user. If I end up having millions of values, fetching markers from NYC while I'm zoomed in on London would make the map activity extremely slow, and the data sent back from the DB would be huge.
That's why, when the user opens the map, I want to fetch all the markers that are, for example, within 10 km of the center of the map. (I'm okay with fetching markers outside of the visible area; I just don't want to fetch markers that are 100 km away.)
After thorough research I chose the S2 Geometry Library approach with its Hilbert space-filling curve.
The idea of mapping a 2D value to a single integer, where the longer the shared prefix between two indexes, the spatially closer they are, was a big selling point.
I need my database to perform this SELECT query lightning fast, and I expect to have A LOT of data in the future, so operating on only one column is a big plus.
Also, the thing that intrigued me most was the ability to perform fast proximity searches, because two points that are close together on the map will have 1D indexes that are also close to each other.
The idea looks very simple (if I'm not missing anything).
The thing I'm having problems with is how (if it's even possible) to pick the min and max values on the 1D line so that I can be sure I'm covering the whole visible area.
Most of the answers and tutorials I find on the internet propose a solution where you take a bounding area full of smaller S2 index "boxes" and then scan every index in the database to see if it's contained in one of the "boxes" from the array. This is easy to do, but when you have 50 million records it's not feasible to go through every single one of them to see if it's in one of the "boxes".
What I have in mind is a solution where you take the minimum and maximum values of the area you're searching in and perform something along the lines of SELECT (...) WHERE s2cellid BETWEEN min AND max.
For example, if I'm at location 47194c and want to fetch all markers within 10 km, I would take a value x to the left of the index and a value x to the right of the index and perform a BETWEEN 47194c-x AND 47194c+x query.
Is something like that possible with the S2 library?
If not, what approach should I take to make my queries as quick as possible?
Thanks in advance :)
[I plan on using PostgreSQL]
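For reference, a single BETWEEN around one cell id cannot be made safe: numerically close ids are spatially close, but a circle on the map does not map back to one contiguous interval on the Hilbert curve. The usual pattern is to cover the search circle with a few S2 cells and issue one BETWEEN per cell over its [rangeMin, rangeMax] leaf-id interval. A hedged sketch with the Java S2 library (com.google.common.geometry, older non-builder API):

import com.google.common.geometry.*;
import java.util.ArrayList;

public class S2RangeQuery {
    private static final double EARTH_RADIUS_M = 6371000;

    public static void main(String[] args) {
        // 10 km cap around the map center (London here).
        S2LatLng center = S2LatLng.fromDegrees(51.5074, -0.1278);
        S1Angle radius = S1Angle.radians(10000 / EARTH_RADIUS_M);
        S2Cap cap = S2Cap.fromAxisAngle(center.toPoint(), radius);

        // Cover the cap with a handful of cells; fewer cells means fewer
        // BETWEEN clauses but a coarser (larger) search area.
        S2RegionCoverer coverer = new S2RegionCoverer();
        coverer.setMaxCells(8);
        ArrayList<S2CellId> cells = new ArrayList<S2CellId>();
        coverer.getCovering(cap, cells);

        // Each covering cell is one contiguous leaf-id range. Ids are
        // unsigned 64-bit, but the min and max of a range share the same
        // face bits, so signed BIGINT comparison should still work per range.
        for (S2CellId cell : cells) {
            System.out.println("SELECT ... FROM markers WHERE s2cellid BETWEEN "
                + cell.rangeMin().id() + " AND " + cell.rangeMax().id());
        }
    }
}

The handful of ranges can be ORed together into a single query, which a plain B-tree index on the s2cellid column can serve.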
I have polydata that looks like this:
What I want to obtain is something smoother, something like this (edited in Paint for demonstration purposes):
So far I've tried the following filters:
vtkWindowedSincPolyDataFilter
vtkSmoothPolyDataFilter
However, the closest I got was with the first one, with a result like this:
Is there any filter or strategy in VTK that would allow me to reach something really close to the second picture?
Thanks in advance.
I suggest you play with the convergence and iteration parameters of vtkSmoothPolyDataFilter to achieve the optimal result for a single application of that filter. If that is not satisfying, why not go ahead and apply it multiple times, one after the other? That is what I would do if I had this problem on my hands.
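For concreteness, a hedged sketch via VTK's Java wrappers (the same calls exist in C++ and Python); the parameter values are starting points to tune, not a recipe:

import vtk.*;

public class SmoothRepeat {
    // One application of the filter with explicit convergence/iterations.
    static vtkPolyData smoothOnce(vtkPolyData input) {
        vtkSmoothPolyDataFilter smoother = new vtkSmoothPolyDataFilter();
        smoother.SetInputData(input);
        smoother.SetNumberOfIterations(200); // more iterations = smoother
        smoother.SetConvergence(0.0);        // 0.0 disables early termination
        smoother.Update();
        return smoother.GetOutput();
    }

    // Applying the filter several times, as suggested above.
    static vtkPolyData smoothThrice(vtkPolyData pd) {
        return smoothOnce(smoothOnce(smoothOnce(pd)));
    }
}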
One other solution could be to generate a binary vtkImageData from this polydata using vtkPolyDataToImageStencil, then smooth the image with something like vtkImageGaussianSmooth, and then go back to the polydata world using vtkMarchingCubes.
You'll need to tweak some parameters for each filter, but that should work and give you more control over the smoothing.
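A hedged sketch of that round trip, again through the Java wrappers; the spacing and smoothing values are placeholders you would derive from your data:

import vtk.*;

public class SmoothViaImage {
    static vtkPolyData smooth(vtkPolyData input, double spacing, double sigma) {
        double[] b = input.GetBounds();

        // Rasterize the polydata into an image stencil.
        vtkPolyDataToImageStencil stencil = new vtkPolyDataToImageStencil();
        stencil.SetInputData(input);
        stencil.SetOutputOrigin(b[0], b[2], b[4]);
        stencil.SetOutputSpacing(spacing, spacing, spacing);
        stencil.SetOutputWholeExtent(0, (int) ((b[1] - b[0]) / spacing),
                                     0, (int) ((b[3] - b[2]) / spacing),
                                     0, (int) ((b[5] - b[4]) / spacing));

        // Turn the stencil into a binary 0/255 image.
        vtkImageStencilToImage toImage = new vtkImageStencilToImage();
        toImage.SetInputConnection(stencil.GetOutputPort());
        toImage.SetInsideValue(255);
        toImage.SetOutsideValue(0);
        toImage.SetOutputScalarTypeToUnsignedChar();

        // Smooth the binary image, then extract the 50% iso-surface.
        vtkImageGaussianSmooth gauss = new vtkImageGaussianSmooth();
        gauss.SetInputConnection(toImage.GetOutputPort());
        gauss.SetStandardDeviations(sigma, sigma, sigma);

        vtkMarchingCubes mc = new vtkMarchingCubes();
        mc.SetInputConnection(gauss.GetOutputPort());
        mc.SetValue(0, 127.5);
        mc.Update();
        return mc.GetOutput();
    }
}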
I've got 3D voxel data, and I want to re-package it for memory efficiency and fast access. The data is generated in a regular octree, one integer value per cell. Unfortunately the data is not sparse, but the cells with the same value should be connected.
Example for one slice:
[11122]
[11223]
[12222]
[44444]
My current idea is to use a kD-Tree, preferably left-balanced, but I'm not sure if there is an efficient algorithm to generate this.
I've got some ideas, but I was hoping that this is one of those problems that already has established algorithms, or at least a name I could google for.
How about OctoMap? As I understand it, it's like an octree, but it merges adjacent occupied areas into regions to save memory. I don't know much about it, though.
EDIT
You could also try my PH-Tree. It works like an octree, but it is quite memory efficient because every node stores only the bits that differ from its parent node. You could actually store your integer value as a 4th dimension. Contrary to intuition, a 4D tree may require less space than a 3D tree, and it may be faster (the explanation is in the PDF that can be found at the link above). If your integer is the 4th dimension, then any subtree will only have entries with 'similar' integers; maybe that is sufficient for your case? Also, any node contains only close neighbours, but close neighbours are not necessarily in the same (or adjacent) nodes.
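A hedged sketch of the 4D idea, with the API as shown on the PH-Tree project page (adjust to the version you use):

import ch.ethz.globis.phtree.PhTree;

public class VoxelPhTree {
    public static void main(String[] args) {
        // x, y, z plus the cell value as a 4th key dimension, so entries
        // with similar values cluster in the same subtrees.
        PhTree<Integer> tree = PhTree.create(4);
        long x = 1, y = 2, z = 3;
        int value = 4;
        tree.put(new long[]{x, y, z, value}, Integer.valueOf(value));
        // Point lookups need the full 4D key; window queries can scan
        // over the value dimension instead.
        System.out.println(tree.get(new long[]{x, y, z, value})); // 4
    }
}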
One further link: http://www.openvdb.org/. Why did I only find this after asking the question? It's like asking for something in the supermarket only to find out that you're standing next to it.
I ended up doing something simpler, because I needed a solution: I convert the voxel volume into a stack of 2D planes, where each plane stores the points at which the value changes relative to the next higher plane. That way the voxel data is only compacted vertically, but it seems to be "good enough" for now. I'll crunch the numbers (space requirements vs. performance) for other data structures if I get some free time.
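One way to read that scheme is per-column run-length encoding: for every (x, y) column, store only the z positions where the value changes. A hedged sketch (the names are mine, and the construction step is omitted):

import java.util.Arrays;

public class ColumnRle {
    private final int width;       // number of cells in x
    private final int[][] changeZ; // per column: z where each run starts; changeZ[c][0] == 0
    private final int[][] values;  // per column: the value of each run

    public ColumnRle(int width, int[][] changeZ, int[][] values) {
        this.width = width;
        this.changeZ = changeZ;
        this.values = values;
    }

    public int get(int x, int y, int z) {
        int c = y * width + x;
        int i = Arrays.binarySearch(changeZ[c], z);
        if (i < 0) i = -i - 2; // index of the run containing z
        return values[c][i];
    }
}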
What's the best way to determine whether a point is within a certain distance of a GeoJSON polygon? Should one use TurfJS's buffer method (https://github.com/Turfjs/turf-buffer#turf-buffer)? Can one perform queries on the buffered polygon?
It's clear to me that one could use TurfJS's inside method (https://github.com/Turfjs/turf-inside) to determine whether a point is within a polygon. I'm just curious what the best approach would be for finding whether or not a point is inside a buffered polygon.
For example:
I have a number of neighborhoods provided as GeoJSON polygon files. I also have a set of locations/addresses for employees (already geocoded to lat/long coordinates). What would be the best way to see whether or not my employees live within 10 miles of a given neighborhood polygon?
Thanks!
Yes, you can use buffer in conjunction with inside to find points within 10 miles of something else, e.g., expanding on the existing examples:
var pt = point(-90.548630, 14.616599) // note: Turf expects (lng, lat) order
var unit = 'miles'
var buffered = buffer(pt, 10, unit)     // 10-mile buffer polygon around pt
var ptTest = point(-1, 52)              // a test point in the UK
var isInside = inside(ptTest, buffered) // point-in-polygon against the buffer
which should obviously be false.
In general, though, buffering is somewhat expensive, so you would not necessarily want to do this every time you run the query. There are a couple of things you can do to speed things up:
1) Pre-buffer your search areas.
2) Use some kind of R-tree index, which will first check bounding-box intersection and avoid lots of unnecessary point-in-polygon operations. TurfJS, which I hadn't heard of until seeing your post, uses JSTS under the hood for a number of operations, including buffering. That library has an implementation of R-tree indexes that you could potentially use. Here is a fun example of this being done.
In general, in situations where you have a spatial (R-tree type) index in place, such as a spatially enabled database like PostGIS on top of Postgres, you would use an operator like ST_DWithin(geom1, geom2, distance) in a WHERE clause to find all points within some distance of another geometry. This is very efficient, because many candidates are rejected by an initial bounding-box test.
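For example, a hedged sketch of that route from Java/JDBC against PostGIS; the table and column names are invented for the employees-versus-neighborhood case above (10 miles is roughly 16,093 m, and the geography cast makes ST_DWithin's distance argument meters):

import java.sql.*;

public class NearbyEmployees {
    public static void main(String[] args) throws SQLException {
        try (Connection c = DriverManager.getConnection(
                 "jdbc:postgresql://localhost/geo", "user", "pass");
             PreparedStatement ps = c.prepareStatement(
                 "SELECT e.name FROM employees e, neighborhoods n " +
                 "WHERE n.id = ? AND ST_DWithin(" +
                 "e.geom::geography, n.geom::geography, 16093)")) {
            ps.setInt(1, 42); // id of the neighborhood polygon
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) System.out.println(rs.getString(1));
            }
        }
    }
}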
Really, it depends on the size of your data and frequency of queries. There is nothing, in principle, wrong with doing contains queries on a buffer. I hope I haven't created more questions than answers.
I'm using GeoScript to do that sort of calculation in JavaScript. It has a distance method in the geom.Geometry class which can return the minimum distance between two geometries. You could use that, or take a look at the source on GitHub to see how they do it if you want to roll your own solution.
I am going to build an interactive choropleth map of Bangladesh. The goal of this project is to build a map system and populate it with different types of data. I have read the documentation for OpenLayers, Leaflet, and D3, and I need some advice to find the right path; the solution must be well optimized.
The map I am going to create will be something like the following: http://nasirkhan.github.io/bangladesh-gis/asset/base_files/bd_admin_3.html. It is based on Leaflet, but it is not mandatory to work with that library. I tried Leaflet because it is easy to use, and I found the expected solution within a very short time.
The requirement of the project is to prepare a choropleth map where I can display related data. For example, I have to show the population of all the divisions of Bangladesh, and at the same time there should be options so that I can also show the literacy rate, male-female ratio, and so on.
The solution I am working on now has some issues: the load time is huge, and if I want to load a second dataset I have to reload the same huge geolocation data. How can I optimize this, or avoid the situation altogether?
Leaflet has a layers control feature. If you cut your data down to just what is required, split it into different layers, and allow the user to select the layers they are interested in viewing, that might cut down on the loading of the data. Another option is to simplify the shapes of the polygons.