Nested Polygons and Balloons - kml

I have a polygon nested inside a bigger polygon in Google Earth. The smaller polygon also has a descriptive balloon that shows info when I click on it. But if I add a balloon to the bigger polygon, the smaller polygon becomes inaccessible, i.e., no balloon shows up for it.
Is there a way to set a zoom threshold so that the smaller polygon becomes visible past that scale, and shows its own balloon when clicked? Any references to kick-start this approach?
Is this possible in Google Earth alone?

Yes, you can achieve this by nesting KML Regions. Basically, you wrap each polygon in a Region, and each Region gets a set of LOD (Level of Detail) limits that specify the size (zoom/range) at which the associated Region is active. This way the large polygon is visible and active at a certain distance; then, at a closer distance, the smaller polygon becomes visible and active.
This is possible to do directly in Google Earth by creating/manipulating the KML as desired. Alternatively, you could write the KML in any text editor and simply load it into the application.
For more information see - Nesting Regions
http://code.google.com/apis/kml/documentation/regions.html#nestingregions
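A minimal sketch of that nesting in KML (the coordinates, LOD pixel thresholds, and polygon rings below are placeholder values to adapt, not from the question):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Document>
    <Placemark>
      <name>Large polygon</name>
      <description>Balloon for the large polygon</description>
      <Region>
        <LatLonAltBox>
          <north>41.0</north><south>40.0</south>
          <east>-70.0</east><west>-71.0</west>
        </LatLonAltBox>
        <!-- active while the region projects to 128-1024 screen pixels -->
        <Lod><minLodPixels>128</minLodPixels><maxLodPixels>1024</maxLodPixels></Lod>
      </Region>
      <Polygon><outerBoundaryIs><LinearRing><coordinates>
        -71,40,0 -70,40,0 -70,41,0 -71,41,0 -71,40,0
      </coordinates></LinearRing></outerBoundaryIs></Polygon>
    </Placemark>
    <Placemark>
      <name>Small polygon</name>
      <description>Balloon for the small polygon</description>
      <Region>
        <LatLonAltBox>
          <north>40.6</north><south>40.4</south>
          <east>-70.4</east><west>-70.6</west>
        </LatLonAltBox>
        <!-- takes over once the user zooms in close enough -->
        <Lod><minLodPixels>256</minLodPixels><maxLodPixels>-1</maxLodPixels></Lod>
      </Region>
      <Polygon><outerBoundaryIs><LinearRing><coordinates>
        -70.6,40.4,0 -70.4,40.4,0 -70.4,40.6,0 -70.6,40.6,0 -70.6,40.4,0
      </coordinates></LinearRing></outerBoundaryIs></Polygon>
    </Placemark>
  </Document>
</kml>
```

Giving the large polygon a finite maxLodPixels makes it (and its balloon) deactivate once you zoom in far enough, so clicks can reach the inner polygon's balloon.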

Related

Live drawing on an SVG file, coordinates problem

I am having some trouble drawing real-world objects on an SVG map.
Context:
I have a map which I converted to an SVG file (Inkscape); this map is then displayed on a web page with 100% width/height.
I then want to draw points on this map. Those points have coordinates in mm (on a very different, much larger scale), so I need to apply a scale factor and a conversion to... pixels?
That's where the difficulty lies for me. An SVG file uses a "user units" measurement system and is then drawn by scaling everything to the frame where it is loaded. I would like to map my real-world point coordinate system to a "user units"-like reference system so that such points can be drawn dynamically on the page.
The web page is HTML/SVG + JavaScript, and I am using the svg.js library to draw everything on it.
Any clue about how to make a transformation to align everything up?
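One way to frame the conversion the question asks about: map a point's mm coordinates linearly into the SVG's viewBox, so the browser's own viewBox-to-pixels scaling handles the rest. A rough sketch, assuming you know the real-world extent the map covers (all names and numbers are illustrative, not from the question):

```javascript
// Convert real-world mm coordinates into SVG user units.
// `world` is the real-world rectangle the map covers (in mm);
// `viewBox` is the SVG's viewBox rectangle (in user units).
function mmToUserUnits(xMm, yMm, world, viewBox) {
  const sx = viewBox.width / world.widthMm;   // mm -> user units, horizontal
  const sy = viewBox.height / world.heightMm; // mm -> user units, vertical
  return {
    x: viewBox.x + (xMm - world.xMm) * sx,
    y: viewBox.y + (yMm - world.yMm) * sy,
  };
}
```

With svg.js you would then draw at the converted position, e.g. something like `draw.circle(5).center(p.x, p.y)`; because the point is expressed in user units, it pans and zooms together with the map.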

How to control KML icon drawing order, top to bottom

I'm displaying many overlapping icons in a Google Earth tour. I'd like to control (or at least understand) the order in which the icons are drawn (which one shows on "top"). Thanks!
P.S. Non-solutions attempted: using gx:drawOrder (it applies to overlays, but not icons); using AnimatedUpdate to establish the order chronologically; using the order in which the placemarks are introduced to establish their drawing order.
Apparently Google Earth draws the features in groups by type: polygons, then ground overlays, followed by lines and point data where drawOrder is applied only within a group. ScreenOverlays are drawn last so they are always on top.
If you define gx:drawOrder or drawOrder on a collection of features, it only applies among features of the same type (polygons with other polygons), not between different types.
That is the behavior if the features are clamped to ground. If features are at different altitudes then lower altitude layers are drawn first.
Note that the tilt angle affects the size of the icon: as the tilt approaches 90 degrees, the icon gets smaller. The icon is at its largest when viewed straight down with a 0-degree tilt angle.

ProcessingJS performance with large data

My goal is to create an interactive web visualization of data from motion tracking experiments.
The trajectories of the moving objects are rendered as points connected by lines. The visualization allows the user to pan and zoom the data.
My current prototype uses Processing.js because I am familiar with Processing, but I have run into performance problems when drawing data with more than 10,000 vertices or lines. I pursued a couple of strategies for implementing the pan and zoom, but the current implementation, which I think is the best, is to save the data as an SVG image and use the PShape data type in Processing.js to load, draw, scale, and translate the data. A cleaned version of the code:
/* @pjs preload="nanoparticle_trajs.svg"; */
PShape trajs;
// pan/zoom state, updated by the mouse-event handlers
float centerX, centerY, imgW, imgH;

void setup() {
  size(900, 600);
  trajs = loadShape("nanoparticle_trajs.svg");
  centerX = width / 2;
  centerY = height / 2;
  imgW = width;
  imgH = height;
}

// repeats every frame and draws elements to the canvas
void draw() {
  background(255);
  shape(trajs, centerX, centerY, imgW, imgH);
}

// ...additional functions that handle mouse events
Perhaps I should not expect snappy performance with so many data points, but are there general strategies for optimizing the display of complex SVG elements with Processing.js? What would I do if I wanted to display 100,000 vertices and lines? Should I abandon Processing altogether?
Thanks
EDIT:
Upon reading the following answer, I thought an image would help convey the essence of the visualization:
It is essentially a scatter plot with >10,000 points and connecting lines. The user can pan and zoom the data and the scale bar in the upper-left dynamically updates according to the current zoom level.
Here's my pitch:
Zoom-level grouping: break data down as your user focuses/zooms in
I would suggest you group together some data and present it as a simple node.
On zooming in to a particular node, you can break down the node and release the group, thus showing its details.
This way you limit the amount of data you need to show in zoomed-out views (where all the nodes are shown), and you add details as the user zooms in to a region, in which case not all nodes will be showing, since zooming in focuses on only one area of your graph.
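The grouping idea above can be sketched as grid binning whose cell size shrinks with the zoom level: each occupied cell becomes one node placed at the centroid of its points. A rough sketch (the base cell size of 100 units and the power-of-two scaling are arbitrary choices, not from the answer):

```javascript
// Group points into grid cells; coarser cells when zoomed out,
// finer cells when zoomed in. Returns one centroid node per cell.
function groupByZoom(points, zoom) {
  const cell = 100 / Math.pow(2, zoom); // cell size halves per zoom step
  const groups = new Map();
  for (const p of points) {
    const key = Math.floor(p.x / cell) + "," + Math.floor(p.y / cell);
    if (!groups.has(key)) groups.set(key, { x: 0, y: 0, count: 0 });
    const g = groups.get(key);
    g.x += p.x; g.y += p.y; g.count++;
  }
  // represent each cell by the centroid of its points
  return [...groups.values()].map(g =>
    ({ x: g.x / g.count, y: g.y / g.count, count: g.count }));
}
```

Zoomed out, thousands of raw points collapse into a handful of drawable nodes; as the user zooms in, the cells shrink and the original detail reappears.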
Viewport limit
Detect what is in the current view area and draw just that. Avoid drawing the whole node-graph structure if the user cannot see it in the viewport: show only what is necessary. I suspect Processing.js already does this, but I don't know whether your zooming functionality takes advantage of it.
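A minimal sketch of that viewport test, assuming the points and the view window share the same data coordinate system (all names are illustrative):

```javascript
// Keep only the points inside the visible pan/zoom window before
// handing them to the renderer. `view` is the visible rectangle
// in data coordinates.
function cullToViewport(points, view) {
  return points.filter(p =>
    p.x >= view.x && p.x <= view.x + view.width &&
    p.y >= view.y && p.y <= view.y + view.height
  );
}
```

Running this before each redraw means the drawing cost scales with what is on screen, not with the full dataset.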
Consider bitmap caching if your nodes are interactive/clickable
If your elements are clickable/interactive, you might want to consider grouping data and showing it as bitmaps (large groups of data shown as a single image) until the user clicks on a bitmap, in which case the bitmap is removed and the original shape is redrawn in its place. This minimizes the number of points/lines the engine has to draw on each redraw cycle.
For bitmap caching see this link (this is Fabric.js, a canvas/SVG library, but the concept/idea is the same), and also this answer I posted to one of my questions about interactive vector/bitmap caching.
As a side note:
Do you really need to use Processing?
If there's no interaction or animation happening and you just want to blit pixels (draw everything once) onto a canvas, consider abandoning a vector-based library altogether. Plain old canvas just blits pixels, and that's all. The initial drawing of the data might have some delay, but since there's no internal reference to the points/shapes/lines after they are drawn, there's nothing eating up your resources or clogging your memory.
So if this is the case - consider making the switch to plain Canvas.
However, data visualisations are all about animation and interactivity, so I doubt you'll want to give them up.

How to construct ground surface of infinite size in a 3D CAD application?

I am trying to create an application similar in UI to Sketchup. As a first step, I need to display the ground surface stretching out in all directions. What is the best way to do this?
Options:
1. Create a sufficiently large regular polygon stretching out in all directions from the origin. Here there is a possibility of the user hitting the edges and falling off the surface of the earth.
2. Model the surface of the earth as a sphere/spheroid. Here I will be limiting my vertex coordinates to very large values prone to round-off errors. (The radius of the earth is 6,371,000,000 millimeters.)
3. Same as 1, but dynamically extend the edges of the earth as the user gets close to them.
What is the usual practice?
I guess you would do neither of these, but instead use a virtual ground.
You just find out what portion of the ground is visible in the viewport and then create a plane large enough to fill it, with some reasonable maximum that simulates the end of the line of sight, i.e. the horizon as we know it.
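The virtual-ground idea can be sketched as recomputing a finite ground quad centered under the camera each frame, capped at a horizon distance (field names and the capping rule are illustrative assumptions, not from the answer):

```javascript
// Recompute a finite ground quad, centered under the camera, sized to
// cover everything visible up to a simulated horizon. The camera never
// reaches the quad's edges because the quad moves with it.
function groundQuad(camera, horizonDistance) {
  const half = Math.min(horizonDistance, camera.farClip); // never past far plane
  return {
    minX: camera.x - half, maxX: camera.x + half,
    minZ: camera.z - half, maxZ: camera.z + half,
    y: 0, // ground plane height
  };
}
```

Because the quad follows the camera, vertex coordinates stay small regardless of how far the user travels, avoiding the round-off problem of option 2.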

My iOS Views are off half a pixel?

My graphics are looking blurry unless I add or subtract a half pixel to the Y coordinate.
I know this is a symptom that usually appears when coordinates are set to sub-pixel values, which leads me to believe one of my views must be off or something.
But I inspected the window, view controller and subviews, and I don't see any origins or centers with sub-pixel values.
I am stumped, any ideas?
Check whether you are using the center property of a view somewhere. If you assign it to other subviews, then depending on their sizes they may position themselves at half-pixel values.
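As an illustration of the arithmetic behind that (plain JavaScript, not iOS code): when a view is positioned by its center, origin = center - size/2, so any odd width or height lands the origin on a half-pixel boundary.

```javascript
// Derive a view's origin from its center point and size,
// mirroring what center-based positioning does.
function originFromCenter(center, size) {
  return {
    x: center.x - size.width / 2,  // odd width  -> .5 fraction
    y: center.y - size.height / 2, // odd height -> .5 fraction
  };
}
```

The fix in UI code is typically to round the resulting frame to whole pixels after assigning the center.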
Also, if you are building the UI in code, I would suggest using https://github.com/domesticcatsoftware/DCIntrospect. This tool lets you inspect the geometry of all visible widgets in the simulator; views at half-pixel positions are highlighted in red, versus blue for integer coordinates. It helps a lot.
