Fusion Tables: Polygon not displayed as of certain zoom level - kml

I'm working on a map that shows different population statistics on a rather granular level in Berlin (447 sub-districts).
https://www.google.com/fusiontables/data?docid=1tIAPGaYK1iEWWLANQOupkAqCcPhVauMjdPS1qOs#map:id=3
For some reason, a small number of polygons (3) are not displayed as soon as you zoom into the map (zoom level 12 or higher).
Since the polygons are displayed at the previous zoom level, they should have valid coordinates. I first thought the shapefiles (KMLs provided by the local statistics authority) might be buggy, but that does not seem to be the case.
Can anybody explain to me why this happens?
Thank you very much!
Michael

There are two possibilities that I can think of:
it is a complexity problem or a winding-direction issue with the polygon; there is a thread on the Fusion Tables Users Group discussing this issue.
it is a complexity issue with the number of "features" on the tile; see Limits in the documentation (it used to be more clearly defined).
Reversing the winding direction of two of the problem polygons seems to fix the issue:
https://www.google.com/fusiontables/DataSource?snapid=S787935DQC4
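For reference, the winding direction of a ring can be checked with the shoelace formula and reversed if needed. A minimal sketch in Python, assuming each KML ring is a list of (lon, lat) pairs with the first vertex repeated at the end (the helper names are hypothetical):

```python
def signed_area(ring):
    """Shoelace formula: positive for counter-clockwise rings, negative for clockwise."""
    area = 0.0
    for (x1, y1), (x2, y2) in zip(ring, ring[1:]):
        area += x1 * y2 - x2 * y1
    return area / 2.0

def ensure_counter_clockwise(ring):
    """Reverse the ring's vertex order if it is wound clockwise."""
    return list(reversed(ring)) if signed_area(ring) < 0 else ring
```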

Related

3D bounding box - is there a simple method for making one?

Is there a simple/quick way to find the dimensions of a 3D axis-aligned bounding box for an item with rotations about the x, y, and z axes? I know that in math there are different methods/equations that arrive at the same answer, and in the past I have given only bare-bones information about a method that worked for me, which I probably didn't explain well.
I also know that there are downloadable functions available for some languages, like Java and Python, to help solve/tackle parts of the math. That might not help those who want to understand the different methods better, though, or those who don't have access to those particular functions in the language they are using.
So if a person has the following information:
The item center point coordinate in the x,y,z
The item x,y,z size
Along with the x,y,z rotation(s)
What would be the easiest approach for a person knowing that information, whether or not it uses the item center point, matrices, or cos and sin? Ideally it would also be compact enough to use as little memory as possible.
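One common textbook approach (not from this thread, just a hedged sketch): build the rotation matrix from the three angles, then note that the world-space half-extent of the rotated box along each axis is the sum of the absolute matrix entries times the local half-sizes. A minimal Python sketch, assuming an Rz·Ry·Rx rotation order and angles in radians:

```python
import math

def rotation_matrix(rx, ry, rz):
    """Rotation matrix for rotations about x, then y, then z (Rz * Ry * Rx).
    The rotation order/convention is an assumption; adjust to your engine's."""
    cx, sx = math.cos(rx), math.sin(rx)
    cy, sy = math.cos(ry), math.sin(ry)
    cz, sz = math.cos(rz), math.sin(rz)
    return [
        [cz * cy, cz * sy * sx - sz * cx, cz * sy * cx + sz * sx],
        [sz * cy, sz * sy * sx + cz * cx, sz * sy * cx - cz * sx],
        [-sy,     cy * sx,                cy * cx],
    ]

def rotated_box_aabb(center, size, rotation):
    """AABB of a box with the given center, (x, y, z) size and (rx, ry, rz) rotation."""
    r = rotation_matrix(*rotation)
    half = [s / 2.0 for s in size]
    # World-space half-extent on axis i = sum over j of |R[i][j]| * half[j].
    extent = [sum(abs(r[i][j]) * half[j] for j in range(3)) for i in range(3)]
    mins = [c - e for c, e in zip(center, extent)]
    maxs = [c + e for c, e in zip(center, extent)]
    return mins, maxs
```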

Finding all geohashes within two bounding coordinates

I have coordinates, which are assigned a corresponding geohash in my database. Now I want to retrieve all of the coordinates within two bounding coordinates (the top-right and bottom-left corners of a bounding box). How can I do that properly?
I tried getting the geohash that fits both of those bounding coordinates, but this does not work when they are in completely different regions of the world (so their geohashes share no common prefix).
Is there a better way to do that?
Thanks for your help
Unfortunately, this isn't something you can do out of the box with the datastore / App Engine. (There are no built-in spatial queries.)
For early prototyping, etc., you can do it the hard way: retrieve all the rows and discard the ones that don't meet your query in code. Obviously, that is probably not viable with real production data.
See related question Query for Entities Nearby with Geopt for some possible production solutions.
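A minimal sketch of that "hard way", assuming each row has hypothetical lat and lng properties (and ignoring boxes that cross the 180th meridian):

```python
def points_in_box(rows, min_lat, max_lat, min_lng, max_lng):
    """Keep only the rows whose coordinates fall inside the bounding box."""
    return [
        row for row in rows
        if min_lat <= row.lat <= max_lat and min_lng <= row.lng <= max_lng
    ]
```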

Algorithm for finding an empty space that fits a rectangle that is closest to a target rectangle among other rectangles

Let's say you're placing rectangular tooltips on a screen of elements you want to provide information for. You want all these tooltips to be visible all at once and not cover any of the nodes any of the other tooltips are for.
You want each tooltip to be as close to the item it's related to as feasible. What algorithm(s) exist to help solve this problem?
I've checked out R-trees, which seem to only help you find collisions but don't help with actually searching for free locations. I've found rectangle-packing algorithms, but they search for a position unconstrained by an objective function (like "be as close to this other element as possible").
I can imagine an algorithm with some physics simulation, where nodes and their tooltips are each connected by a kind of rubber band and it plays out until equilibrium, but I'd think things could be calculated faster and with less complexity than that.
Any related algorithms or libraries would be helpful. Bonus points for a javascript library : )
You might investigate map labeling algorithms.
See, for example, these lecture notes by Roberto Tamassia at Brown:
PDF download.
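For illustration, here is a hedged sketch of a simple greedy label-placement heuristic (not a specific library, and in Python rather than JavaScript): for each item, try candidate tooltip positions ordered nearest-first and keep the first one that overlaps neither the items nor the tooltips already placed. Rectangles are (x, y, width, height) tuples.

```python
def overlaps(a, b):
    """Axis-aligned rectangle overlap test."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def place_tooltips(items, tooltip_size, candidates):
    """items: item rectangles; candidates(item): candidate top-left corners for
    that item's tooltip, nearest first. Returns one rectangle (or None) per item."""
    w, h = tooltip_size
    placed = []   # tooltip rectangles chosen so far
    result = []
    for item in items:
        chosen = None
        for (x, y) in candidates(item):
            rect = (x, y, w, h)
            if not any(overlaps(rect, other) for other in items + placed):
                chosen = rect
                placed.append(rect)
                break
        result.append(chosen)
    return result
```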

How can I select layer by location AND attributes in ArcMap?

I have a dataset (i.e. a shapefile) containing spatial location data (coordinates) and elevation data as well as other attribute fields.
I want to select points which have at least 200m vertical separation (i.e. are at least 200m apart on the z-axis) AND are within 3km of each other.
The aim is to create a new shapefile with all points that have this relationship with 1 or more other points.
I'm sure there is a solution to this problem (maybe not using ArcMap at all?) but I just can't find it. Any help would be greatly appreciated.
Chris
You are going to have much better luck asking this question in gis.stackexchange.com. Many more ESRI users/programmers there. As a matter of fact I bet you find your solution there without having to ask the question.
You can run the ArcGIS Near tool on all the points.
Then select by attribute points with Z values of >200m and distance values of <3000m.
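Outside ArcMap, the same selection can be done with a brute-force pairwise check, which is fine for a few thousand points. A hedged sketch, assuming each point is an (x, y, z) tuple in metres (projected coordinates plus elevation) and that the 3 km limit refers to horizontal distance:

```python
import math

def select_points(points, min_dz=200.0, max_dist=3000.0):
    """Indices of points that have at least one partner with >= min_dz vertical
    separation and <= max_dist horizontal distance."""
    selected = set()
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            xi, yi, zi = points[i]
            xj, yj, zj = points[j]
            if abs(zi - zj) >= min_dz and math.hypot(xi - xj, yi - yj) <= max_dist:
                selected.update((i, j))
    return sorted(selected)
```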

What is the best approach to compute efficiently the first intersection between a viewing ray and a set of objects?

For instance:
An approach to efficiently compute the first intersection between a viewing ray and a set of three objects: one sphere, one cone and one cylinder (or other 3D primitives).
What you're looking for is a spatial partitioning scheme. There are a lot of options for dealing with this, and a lot of research has been spent on this area as well. A good read would be Christer Ericson's Real-Time Collision Detection.
One easy approach covered in that book would be to define a grid, assign all objects to every cell they intersect, and walk along the grid cells intersected by the line, front to back, intersecting the ray with each object associated with that grid cell. Keep in mind that an object might be associated with multiple grid cells, so a computed intersection point might not actually lie in the current cell but further along the ray.
The next question would be how you define that grid. Unfortunately, there's no one good answer, and you need to consider what approach might fit your scenario best.
Other partitioning schemes of interest are different tree structures, such as kd-, Oct- and BSP-trees. You could even consider using trees combined with a grid.
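A hedged sketch of the grid-construction step described above, assuming each object can report an axis-aligned bounding box via a hypothetical bounds() method:

```python
from collections import defaultdict

def build_grid(objects, cell_size):
    """Map each grid cell (ix, iy, iz) to the objects whose bounding box overlaps it."""
    grid = defaultdict(list)
    for obj in objects:
        (x0, y0, z0), (x1, y1, z1) = obj.bounds()  # (min corner, max corner)
        for ix in range(int(x0 // cell_size), int(x1 // cell_size) + 1):
            for iy in range(int(y0 // cell_size), int(y1 // cell_size) + 1):
                for iz in range(int(z0 // cell_size), int(z1 // cell_size) + 1):
                    grid[(ix, iy, iz)].append(obj)
    return grid
```

Walking the cells along the ray in front-to-back order (e.g. with a 3D DDA traversal) is then a separate step.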
EDIT
As pointed out, if your set really is just these three objects, you're definitely better off intersecting each one and picking the earliest hit. If you're looking for ray-sphere, ray-cylinder, etc. intersection tests, these are not really hard, and a quick Google search should supply all the math you might possibly need. :)
"computationally efficient" depends on how large the set is.
For a trivial set of three, just test each of them in turn, it's really not worth trying to optimise.
For larger sets, look at data structures which divide space (e.g. kd-trees). Whole chapters (and indeed whole books) are dedicated to this problem. My favourite reference book is An Introduction to Ray Tracing (ed. Andrew S. Glassner).
Alternatively, if I've misread your question and you're actually asking for algorithms for ray-object intersections for specific types of object, see the same book!
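A minimal sketch of the brute-force approach for a small set: intersect the ray with every object and keep the nearest hit. Only the ray-sphere test is shown here; ray-cone and ray-cylinder tests would slot in the same way.

```python
import math

def intersect_sphere(o, d, center, radius):
    """Smallest t >= 0 with |o + t*d - center| = radius, or None if the ray misses."""
    oc = [o[i] - center[i] for i in range(3)]
    a = d[0] * d[0] + d[1] * d[1] + d[2] * d[2]
    b = 2.0 * (oc[0] * d[0] + oc[1] * d[1] + oc[2] * d[2])
    c = oc[0] * oc[0] + oc[1] * oc[1] + oc[2] * oc[2] - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None
    sqrt_disc = math.sqrt(disc)
    for t in ((-b - sqrt_disc) / (2.0 * a), (-b + sqrt_disc) / (2.0 * a)):
        if t >= 0:
            return t
    return None

def first_hit(o, d, spheres):
    """Return (t, sphere) for the nearest intersection along the ray, or None."""
    best = None
    for center, radius in spheres:
        t = intersect_sphere(o, d, center, radius)
        if t is not None and (best is None or t < best[0]):
            best = (t, (center, radius))
    return best
```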
Well, it depends on what you're really trying to do. If you'd like to produce a solution that is correct for almost every pixel in a simple scene, an extremely quick method is to pre-calculate "what's in front" for each pixel by pre-rendering all of the objects, each with a unique identifying color, into a background buffer using scan conversion (i.e. the z-buffer). This is sometimes referred to as an item buffer.
Using that pre-computation, you then know what will be visible for almost all rays that you'll be shooting into the scene. As a result, your ray-environment intersection problem is greatly simplified: each ray hits one specific object.
When I was doing this many years ago, I was producing real-time raytraced images of admittedly simple scenes. I haven't revisited that code in quite a while but I suspect that with modern compilers and graphics hardware, performance would be orders of magnitude better than I was seeing then.
PS: I first read about the item buffer idea when I was doing my literature search in the early 90s. I originally found it mentioned in (I believe) an ACM paper from the late 70s. Sadly, I don't have the source reference available but, in short, it's a very old idea and one that works really well on scan conversion hardware.
I assume you have a ray d = (dx,dy,dz), starting at o = (ox,oy,oz) and you are finding the parameter t such that the point of intersection p = o+d*t. (Like this page, which describes ray-plane intersection using P2-P1 for d, P1 for o and u for t)
The first question I would ask is "Do these objects intersect"?
If not, then you can cheat a little and check for ray collisions in order. Since you have three objects that may or may not move each frame, it pays to pre-calculate their distances from the camera (e.g. from their centre points). Test against each object in turn, ordered by distance from the camera, from smallest to largest. Although the empty space is now the most expensive part of the render, this is more effective than just testing against all three and taking the minimum value. If your image is high-res then this is especially efficient, since you amortise the cost across the number of pixels.
Otherwise, test against all three and take a minimum value...
In other situations you may want a hybrid of the two methods. If you can test two of the objects in order then do so (e.g. a sphere and a cube moving down a cylindrical tunnel), but also test the third and take the minimum value to find the final hit.
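A hedged sketch of the ordered test described above, assuming a hypothetical per-primitive intersect(obj, o, d) function that returns a parameter t or None. Note that the early exit is only as good as the assumption that ordering by centre distance matches the actual hit order:

```python
import math

def first_hit_ordered(o, d, objects, intersect):
    """Test objects nearest-centre-first and return (t, obj) for the first hit found."""
    for obj in sorted(objects, key=lambda obj: math.dist(o, obj.center)):
        t = intersect(obj, o, d)
        if t is not None:
            return t, obj
    return None
```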
