I am working with geospatial data (quite a big dataset). The large data polygons are distributed across the map, and I need to subdivide the areas of interest (where I have data) into tiles to work on them further.
Currently I am using the PostGIS function ST_SquareGrid, which creates polygons of the required size. Command to create the tiles in PostGIS:
ST_SquareGrid(0.2,geom)
geom is the polygon that I need to subdivide and 0.2 is the size in meters at which I am creating the tiles. I am using projection 3857.
This function seemed to work completely fine until I found that it creates tiles for areas where no data is available, as shown in the image below. Only three squares should be there, as the fourth square does not have any data in it.
Can you help me control this so that it only creates tiles where data is present and avoids creating empty tiles?
(The solution can be in GeoDjango or PostGIS.)
You could use DELETE in a subsequent step after generating the square grid. The idea is to remove the tiles that do not intersect your initial features. There are different approaches to this. The first makes use of ST_Intersects, so it can take advantage of spatial indices if they exist.
DELETE FROM grid
WHERE grid.gid NOT IN (
    SELECT grid.gid
    FROM grid, features
    WHERE ST_Intersects(
        grid.geom,
        features.geom
    )
);
Alternatively, you could use this query, which makes use of ST_Disjoint and does not use spatial indices, i.e. its performance will be slower. Note that the scalar subquery assumes the features table contains a single row.
DELETE FROM grid
WHERE ST_Disjoint(
    geom,
    (SELECT geom FROM features)
);
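If you prefer GeoDjango, here is a minimal sketch of the same idea (the model names Tile and AreaOfInterest are assumptions for illustration, not part of your schema):

from django.contrib.gis.db import models

# Hypothetical models; adjust names and fields to your actual schema.
class AreaOfInterest(models.Model):
    geom = models.PolygonField(srid=3857)

class Tile(models.Model):
    geom = models.PolygonField(srid=3857)

def drop_empty_tiles(area):
    # Delete every tile that does not intersect the area of interest,
    # mirroring the first SQL query above.
    Tile.objects.exclude(geom__intersects=area.geom).delete()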
Related
I am a beginner in modelling. Can I increase an object's polygon count in 3ds Max? I want a smooth object that does not have a low polygon count.
You can increase the polygon count of your model in several ways:
Use the Subdivide modifier.
Use the TurboSmooth modifier.
Use the Tessellate modifier.
You can use the TurboSmooth modifier for this purpose, but before using it you should make sure your model has enough edges and check that the edges are correct. For example, make sure a vertex does not connect to an odd number of edges (like 3 or 5 edges); always try to keep it at an even number, with 4 edges being the standard. Check the image below:
Both of them use odd numbers (3 and 5), which is not correct.
Try to add more edges to your model and chamfer them if necessary before applying TurboSmooth or other smoothing modifiers.
You can smooth it by adding a Smooth or TurboSmooth modifier, then converting it to an Editable Poly again (or collapsing the stack from the Modify tab); this will redistribute polygons evenly over the object's faces.
Alternatively, you can select a ring or loop of faces and then use the Connect tool to add more resolution (polygons) to those selected polygons only.
As for the fish model in your image, you can select one of the faces along its height, then in the polygon edit section of the Modify panel on the right click Ring; it will select all of those faces around the fish. Then scroll down in the Modify panel, click Connect, and set the number of segments to add more resolution.
UPDATE:
Please take into consideration that there is no such thing as infinite segments: even spheres (balls) or cylinders in 3D simply have a higher number of segments (usually 32+). You can double or triple that, but not much more; increasing an object's segments to very high values can bring your computer to its knees while modelling or rendering.
New versions of 3ds Max (2015, 2016) have a new subdivision modifier called OpenSubdiv. This can be used to subdivide your model and give it more polygons. There are also the TurboSmooth, MeshSmooth, Tessellate, and Subdivide modifiers. All of these will add more geometry to your model, based on different algorithms.
I am looking to use Cassandra for a nearby-search type of query: based on my lon/lat coordinates I want to retrieve the closest points. I do not need 100% accuracy, so I am comfortable using a bounding box instead of a circle (better performance too), but I can't find concrete instructions (hopefully with an example) on how to implement a bounding box.
From my experience, there's no easy way to have a generic geospatial index search on top of Cassandra. I believe you only have two options:
Geohashing: split your dataset into square/rectangular elements, for example by using the integer parts of lat/lon as indexes into a grid. When searching, you load all elements in the enclosing grid cell(s) and perform a full neighbour scan inside your application (see the sketch after this list).
This works well if you have an evenly distributed dataset, like the grid points in an NWP simulation that I've worked with.
It works really badly on datasets like "restaurants in the USA", where most of the points cluster around large cities. You'll get an unbalanced, high load on some grid elements (such as the New York area) and absolutely empty index buckets located somewhere in the Atlantic Ocean.
External indexes like ElasticSearch/Solr/Sphinx/etc.
All of them have geospatial indexing support out-of-the-box, no need to develop your own in your application layer.
You have to set up a separate indexing service and keep the Cassandra and index data in sync. There are some Cassandra/search integrations like DSE (commercial) and stargate-core (I've never heard of anyone using this in production), or you can roll your own, but all of these require time and effort.
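As a rough illustration of the first option, here is a small Python sketch of grid-cell bucketing; the cell size, key format and the fetch_bucket call are assumptions, not part of any Cassandra driver:

import math

CELL_SIZE_DEG = 1.0  # assumed grid resolution; tune it to your data density

def bucket_key(lat, lon):
    # Map a coordinate to the grid cell it falls in (used as the partition key).
    return f"{math.floor(lat / CELL_SIZE_DEG)}:{math.floor(lon / CELL_SIZE_DEG)}"

def covering_keys(min_lat, min_lon, max_lat, max_lon):
    # Enumerate every grid cell touched by a bounding box.
    for i in range(math.floor(min_lat / CELL_SIZE_DEG), math.floor(max_lat / CELL_SIZE_DEG) + 1):
        for j in range(math.floor(min_lon / CELL_SIZE_DEG), math.floor(max_lon / CELL_SIZE_DEG) + 1):
            yield f"{i}:{j}"

# Query idea: load all rows from the covering cells (fetch_bucket is a
# placeholder for your own Cassandra read), then do the exact neighbour
# filtering in the application:
# candidates = [row for key in covering_keys(*bbox) for row in fetch_bucket(key)]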
This issue was touched on at the Euro Cassandra Summit in 2014.
RedHat: Scalable Geospatial Indexing with Cassandra
The presenter explains how he created a spatial index using User Defined Types that is well suited to querying geospatial data with a region or bounding-box based lookup.
The general idea is to break up your data into regions that are defined by bounding boxes. Each region then represents a rowkey, which you can then use to access any data associated with that region. If you have a location of interest, you query the keyspace on the regions which fall inside that area.
Back story: I'm creating a Three.js based 3D graphing library. Similar to sigma.js, but 3D. It's called graphosaurus and the source can be found here. I'm using Three.js, with a single particle representing each node in the graph.
This was the first task I had to deal with: given an arbitrary set of points (that each contain X,Y,Z coordinates), determine the optimal camera position (X,Y,Z) that can view all the points in the graph.
My initial solution (which we'll call Solution 1) involved calculating the bounding sphere of all the points and then scaling it to a sphere of radius 5 around the point (0, 0, 0). Since the points are then guaranteed to always fall in that area, I can set a static position for the camera (assuming the FOV is static) and the data will always be visible. This works well, but it either requires changing the point coordinates the user specified or duplicating all the points, neither of which is great.
My new solution (which we'll call Solution 2) involves not touching the coordinates of the input data, but instead just positioning the camera to match the data. I encountered a problem with this solution: for some reason, when dealing with really large data, the particles seem to flicker when positioned in front of or behind other particles.
Here are examples of both solutions. Make sure to move the graph around to see the effects:
Solution 1
Solution 2
You can see the diff for the code here
Let me know if you have any insight on how to get rid of the flickering. Thanks!
It turns out that my near value for the camera was too low and the far value was too high, resulting in "z-fighting". By narrowing these values on my dataset, the problem went away. Since my dataset is user dependent, I need to determine an algorithm to generate these values dynamically.
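For what it is worth, here is a plain-Python sketch of one way to derive those values from the data: compute a bounding sphere, back the camera off far enough to see it, and clamp near/far tightly around it. The FOV and margin values are assumptions; translating this to Three.js is straightforward since it is only geometry.

import math

def camera_fit(points, fov_deg=45.0, margin=1.1):
    # Approximate bounding sphere: centre of the axis-aligned bounding box,
    # radius = distance to the farthest point from that centre.
    xs, ys, zs = zip(*points)
    centre = ((min(xs) + max(xs)) / 2, (min(ys) + max(ys)) / 2, (min(zs) + max(zs)) / 2)
    radius = max(math.dist(centre, p) for p in points)

    # Distance at which a sphere of this radius fills the vertical field of view.
    distance = margin * radius / math.sin(math.radians(fov_deg) / 2)

    near = max(distance - margin * radius, 1e-3)  # keep near strictly positive
    far = distance + margin * radius
    return distance, near, far

Keeping the far/near ratio small leaves more depth-buffer precision for separating nearby particles, which is what removed the flicker here.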
I noticed that in Solution 2 the flickering only occurs while the camera is moving. One possible reason is that, when the camera position changes rapidly, different transforms get applied to different particles. So if the camera moves from X to X + DELTAX during a time step, one set of particles gets the camera transform for X while the others get the transform for X + DELTAX.
If you separate your rendering from the user interaction, that should fix the issue, assuming this is the cause. That means you should apply the same transform to all the particles and the edges connecting them, by locking (not updating) the transform matrix until the rendering loop is done.
I want to import a set of 3D geometries into the current scene. The imported geometries contain tons of basic components which may represent an entire building. The product manager wants the entire building to be displayed as a 3D miniature (colors and textures must correspond to the original building).
The problem: is there any algorithm which can handle this large amount of data at a reasonable time and memory cost?
//worst case: there may be a billion triangle surfaces in the imported data
And, by the way, I am considering another solution: using a kind of texture mapping:
1. Take enough snapshots of the imported objects with the software renderer.
2. Apply the images to a surface.
3. Use some shader tricks to perform effects like bump mapping, so that when the view position changes the texture alters and makes the viewer feel as if they were looking at a 3D scene.
My modeller and renderer are ACIS and HOOPS. Any ideas?
An option is to generate side views of the building at a suitable resolution using the rendering engine and map them as textures onto a parallelepiped.
The next level of refinement is to obtain a bump or elevation map that you can use for embossing. Not the easiest to do.
If the modeller allows it, you can slice the volume using a 2D grid of "voxels" (actually prisms). You can do that by repeatedly cutting the model in two with a plane, and in every prism find the vertex closest to the observer. This will give you a 2D map of elevations at the desired resolution.
Alternatively, intersect parallel "rays" (linear objects) with the solid and keep the first endpoint.
It can also be that your modeller includes a true voxel model, or that rendering can be done with a Z-buffer that you can access.
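A sketch of the ray-based variant in Python; intersect_ray is a placeholder for whatever ray-firing routine ACIS/HOOPS gives you, not a real API:

def intersect_ray(model, origin, direction):
    # Placeholder: return the distance to the first intersection of the ray
    # with the solid, or None.  Replace with the ACIS/HOOPS call you have.
    raise NotImplementedError

def elevation_map(model, origin, right, up, forward, width, height, cell):
    # Fire a grid of parallel rays at the solid and keep the first hit per
    # ray, producing a 2D depth/elevation map at the chosen resolution.
    # right/up/forward are assumed to be orthogonal unit vectors; cell is
    # the sampling step in model units.
    depths = [[None] * width for _ in range(height)]
    for j in range(height):
        for i in range(width):
            o = tuple(origin[k]
                      + (i - width / 2) * cell * right[k]
                      + (j - height / 2) * cell * up[k]
                      for k in range(3))
            depths[j][i] = intersect_ray(model, o, forward)
    return depths

The resulting depth grid can be normalised and used as the bump or elevation map mentioned above.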
Scenario: I have a large dataset, with each entry containing a location (x,y - coordinates).
I want to be able to request every entry from this dataset that is within 100 m of a given location, and have the results returned as an array.
How does one go about implementing something like this? Are there any recommended patterns or frameworks? I've previously only worked with relational or simple key-value data.
The data structure that solves this problem efficiently is a k-d tree. There are many implementations available, including a node.js module.
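For example, with SciPy's cKDTree (a sketch assuming planar x/y coordinates in meters, matching the question's setup; for lon/lat you would project first):

import numpy as np
from scipy.spatial import cKDTree

# points: N x 2 array of (x, y) coordinates in meters
points = np.array([[0.0, 0.0], [50.0, 20.0], [500.0, 500.0]])
tree = cKDTree(points)

# Every entry within 100 m of a query location.
query = np.array([10.0, 10.0])
nearby = points[tree.query_ball_point(query, r=100.0)]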
Put your dataset into PostgreSQL and use an R-tree index. You can then do a bounding-box query to get all points within ±100 m of the location, then calculate the radial distance and accept the points within 100 m. You can roll your own schema and queries or use PostGIS.
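The two-step logic described here (cheap bounding-box prefilter, then an exact radial check) looks roughly like this as a plain-Python sketch; the haversine helper assumes lat/lon in degrees and the function names are mine:

import math

EARTH_RADIUS_M = 6371000.0

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in meters between two lat/lon points.
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat, dlon = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def within_radius(entries, lat, lon, radius_m=100.0):
    # entries: iterable of (lat, lon) pairs; returns those within radius_m.
    dlat = math.degrees(radius_m / EARTH_RADIUS_M)
    dlon = dlat / max(math.cos(math.radians(lat)), 1e-9)
    # Bounding-box prefilter (this is the part an R-tree index does for you).
    box = [(a, b) for a, b in entries if abs(a - lat) <= dlat and abs(b - lon) <= dlon]
    # Exact radial check on the survivors.
    return [(a, b) for a, b in box if haversine_m(lat, lon, a, b) <= radius_m]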
Unlike R-trees, k-d trees are not inherently balanced, so depending on how a k-d tree is built you can get inconsistent performance due to an unbalanced tree and its longest path.