Is there a way to show/hide objects on a map based on the zoom level in the Azure indoor maps module? Honestly, I'm not sure this feature even exists yet.
Custom styling of indoor maps in Azure Maps is a planned feature.
Note that indoor maps leverage vector tiles for rendering, and items that don't appear when zoomed out are not loaded in the map, so you won't be able to show things that aren't there at this time. There will likely be some configuration for this in the future. Things that do appear today can potentially be hidden, although in a bit of a hacky way, since custom styling of indoor maps isn't officially supported yet.
For example, using the building from the indoor maps tutorial, the following sets the zoom level range of the room number labels to 0 - 22.
map.map.setLayerZoomRange("indoor_global_unit_label", 0, 22)
The first argument of that function is the id of the rendering layer, which I retrieved by running the following code in the console and then moving the mouse over the item I wanted the id for:
map.events.add('mousemove', function (e) {
    // Log the rendering layer id of the top-most shape under the cursor.
    console.log(e.shapes[0].layer.id);
});
The second and third parameters of the setLayerZoomRange method are the min and max zoom levels. When that line of code is run, you will notice the labels stay visible much further out than usual; however, if you zoom out far enough they disappear, since they are no longer available in the vector tiles.
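If you want to hide a layer entirely rather than widen its zoom range, something like the following may also work, since map.map exposes the underlying vector map's style API (the same place setLayerZoomRange comes from). This is just as unofficial as the above, so treat it as a sketch:

// Unofficial: toggle a rendering layer's visibility via the underlying map.
map.map.setLayoutProperty("indoor_global_unit_label", "visibility", "none");    // hide the labels
map.map.setLayoutProperty("indoor_global_unit_label", "visibility", "visible"); // show them again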
Is there an easy way to show a 3D entity at all times, even when that entity is hidden behind another entity? For example, I want lines to always be shown, even when they are behind a mesh surface.
I use the Qt3D framework.
Assuming that you are talking about the Qt3D framework, I want to extend Rabbid76's answer.
To disable depth testing in the Qt3D framework, add a QRenderStateSet to the framegraph branch that renders things (the one that has a QViewport, for example) and add a QDepthTest to it. Then set the depth function of the QDepthTest to Always. This way the depth test always passes and entities in the back will also be drawn, depending on the drawing order. You can use QSortPolicy to adjust the drawing order to back-to-front.
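A minimal C++ sketch of that setup, assuming viewport is the QViewport node your existing framegraph already contains:

#include <Qt3DRender/QRenderStateSet>
#include <Qt3DRender/QDepthTest>

// Extend the existing viewport branch with a render state set.
auto *stateSet = new Qt3DRender::QRenderStateSet(viewport);
auto *depthTest = new Qt3DRender::QDepthTest(stateSet);
depthTest->setDepthFunction(Qt3DRender::QDepthTest::Always); // every fragment passes
stateSet->addRenderState(depthTest);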
But this won't work when the camera position changes and the entity that you always want drawn ends up in front. I'd suggest you add another framegraph branch and use a QLayerFilter to deactivate depth testing for this one entity only.
If your entity looks weird with depth testing deactivated (likely for complex objects), you could replace the QDepthTest with a QClearBuffers and simply clear the depth buffer.
Have a look at my answer here, where I showed an example of a custom framegraph with a depth test.
If the depth test is disabled, then the geometry (like a line) is always drawn on top of the previously drawn geometry. The depth test can be disabled by:
glDisable(GL_DEPTH_TEST)
See glEnable
As an alternative, the depth test function can be set to let a fragment always pass the depth test. In Qt3D this can be done with the class QDepthTest, using the enumerator constant Qt3DRender::QDepthTest::Always.
In this case, you have to take care of the order in which the geometry is drawn.
You have to find a way to render the polygons (opaque geometry) first, using the depth test function Qt3DRender::QDepthTest::Less.
After that, you have to render the lines on top, using the depth test function Qt3DRender::QDepthTest::Always.
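A sketch of how the two passes could be wired up as two framegraph branches (children of a framegraph node are executed in declaration order); opaqueLayer and lineLayer are hypothetical QLayer instances assumed to be tagged on the respective entities, and viewport is again the existing QViewport node:

#include <Qt3DRender/QLayerFilter>
#include <Qt3DRender/QRenderStateSet>
#include <Qt3DRender/QDepthTest>

// First branch: opaque geometry with the normal Less depth test.
auto *opaqueFilter = new Qt3DRender::QLayerFilter(viewport);
opaqueFilter->addLayer(opaqueLayer);
auto *opaqueStates = new Qt3DRender::QRenderStateSet(opaqueFilter);
auto *lessTest = new Qt3DRender::QDepthTest(opaqueStates);
lessTest->setDepthFunction(Qt3DRender::QDepthTest::Less);
opaqueStates->addRenderState(lessTest);

// Second branch, executed after the first: lines always pass the depth test.
auto *lineFilter = new Qt3DRender::QLayerFilter(viewport);
lineFilter->addLayer(lineLayer);
auto *lineStates = new Qt3DRender::QRenderStateSet(lineFilter);
auto *alwaysTest = new Qt3DRender::QDepthTest(lineStates);
alwaysTest->setDepthFunction(Qt3DRender::QDepthTest::Always);
lineStates->addRenderState(alwaysTest);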
I'm working on a new W10 UWP app, and ran into a small issue.
I'm generating tiles with a separate Tile service - it creates secondary tiles, and updates them when required (and also updates the primary tile).
However, for some reason, even though the Wide and Medium templates are used, the only sizes the tile offers when right-clicked are "Small" and "Medium".
I found no reference on how to enable the Wide and Large tile options. Does anyone know?
Okay, so the solution is simple...
It's not enough to add the Live Tile templates; you have to manually specify the sizes by adding the proper graphics Uris to the SecondaryTile instance. So if you want a wide tile to be available, assign a value to
tile.VisualElements.Wide310x150Logo
and so on, before calling
await tile.RequestCreateAsync()
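Putting it together, a minimal C# sketch; the tile id, arguments, and asset paths are placeholders you'd replace with your own:

// Hypothetical ids and asset paths - replace with your own.
var tile = new SecondaryTile("myTileId", "My tile", "activationArgs",
    new Uri("ms-appx:///Assets/Square150x150Logo.png"), TileSize.Wide310x150);

// Assigning these logos is what makes the extra sizes show up.
tile.VisualElements.Wide310x150Logo = new Uri("ms-appx:///Assets/Wide310x150Logo.png");
tile.VisualElements.Square310x310Logo = new Uri("ms-appx:///Assets/Square310x310Logo.png"); // enables "Large"

bool pinned = await tile.RequestCreateAsync();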
I have a large dataset of points, and to avoid showing them all at the same time I'm using nested regions. These regions contain network links to a server which generates KML files of points and more network links. I'll show x points at the outermost region. That region is made up of smaller regions, each showing x points, etc. I don't want to show any point more than once, but each point is eligible under many different regions. How can I determine if I've already shown a point?
Ideas:
Use a deterministic algorithm to pick out the points for a level, so each region can reverse-calculate which points have already been selected. Problem is that as soon as a point gets added to the database, the algorithm no longer works, as the parameters have changed. Also sounds sloooow.
Store in the DB, upon population, which region a point should be shown at, based on some algorithm. Problem is you have to decide where to put a point every time you add one (not that bad if you initially assign regions for all points, though).
Edit: Or, instead of using accumulative regions, I could set the maximum LOD for each region (see the snippet below).
Con: lots of points could be loaded multiple times. Pro: should be pretty simple, and updating the DB while already in Google Earth won't hurt things.
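For reference, an exclusive region of that kind is expressed in KML roughly like this; the coordinates and pixel thresholds are placeholders:

<Region>
  <LatLonAltBox>
    <north>51.6</north>
    <south>51.3</south>
    <east>0.2</east>
    <west>-0.5</west>
  </LatLonAltBox>
  <Lod>
    <!-- Active only while the region covers 128-512 screen pixels;
         -1 for maxLodPixels would mean "active at any larger size". -->
    <minLodPixels>128</minLodPixels>
    <maxLodPixels>512</maxLodPixels>
  </Lod>
</Region>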
Edit Edit: Tried using exclusive regions (https://developers.google.com/kml/documentation/images/LodRange3.gif) and ran into some problems. Namely, where one region ends (max LOD) and another starts (min LOD), you can set a slight overlap to avoid a gap, but it will never be perfect. Also, this is okay if you start fully zoomed out, so the outermost region is 'active'; then, as you zoom in somewhere, the regions become active. The problem is if you start zoomed in: your outer region is never loaded, so it never loads any of its subregions, and the placemarks won't be displayed. Generating all the possible regions from the start (just not displaying them) is not an option, as there would be millions of files.
I was wondering how the Nike website makes the change you can see when selecting a color or a sole. At first I thought they were only using images, and when the user picked a color you just replaced that part, but when I selected a different sole I noticed it didn't change like an image; it looked a bit more as if it were being rendered. Does anybody happen to know how this is done? Or where I can get further info about making this effect? :)
It's hard to know for sure, but my guess would be that they're using a rendering service similar to that provided by Adobe's Scene7.
It's a product that is used to colorize/customize a base product image based on user choices.
If you're interested in using the service, I'd suggest signing up for their weekly webinar. I attended one a while back and was very impressed with their offering. They showed the Converse site (which had functionality almost identical to the Nike site) as a demo.
A lot of these tools are built out in Flash using a variety of techniques:
1) You can use Flash's BitmapData object to directly shift the hues of the pixels in your item (see the sketch after this list). This is probably the simplest technique but often limits you to simple color transformations.
2) You can pre-render transparent PNGs (or photos, I guess) containing the various textures you would want to show on your object (for instance, patterns or textures) and have them dynamically added to your stage at runtime. This, I think, offers the highest fidelity but means you need all of your items rendered upfront.
3) You can create 3D COLLADA files and load them via a library like Papervision3D, then dynamically change the texture at runtime. This is the most memory-intensive technique and tends to result in far worse fidelity, but in return you get a full 3D object that you can view in space.
I'm sure there are other techniques but those are the top 3 I can think of. I hope that helps!
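For technique 1, a minimal ActionScript 3 sketch of the idea; bmp is a hypothetical BitmapData that holds the item's pixels:

import flash.display.BitmapData;
import flash.geom.ColorTransform;

// Shift the item's pixels toward red by scaling the channel multipliers.
bmp.colorTransform(bmp.rect, new ColorTransform(1.2, 0.6, 0.6));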
I'm trying to generate a KML file to display a set of features scattered around the UK. I would like the features to be grouped together at higher zoom levels, ideally displaying as an icon with a count of the number of features, so that users can see clusters of features easily.
Essentially I'm trying to do something along these lines, but in Google Earth, not Maps.
Can anyone point me in the right direction? I'm a bit of a newbie with KML :-)
Cheers,
RB.
ANSWERS:
My own research suggests I can do what I want using Regions to define bounding boxes for certain features.
It has also been suggested I should do this using network links, which I'm going to investigate as I think it's a better match for other reasons too.
Is this a standalone KML file? Or the KML returned as data for a network link?
In the first case, I'm not sure this is even possible. I have seen layer transparency change with "camera altitude", so perhaps something like this is also possible on features? Then you could add both the single features and the grouped features into the same KML file and make them visible based on "distance to camera". It could be a new KML feature I missed, but you'd have to check the KML specification.
In the second case, you just return KML that matches the given network link viewport information. Based on the bounding box you get, you can subdivide that box into a grid and cluster per cell. If you have one feature in a cell, return the feature; if you have more than one, return just a "grouped feature" for that cell. The clustering will then automatically change as the user moves around in Google Earth: after each camera change, your network link URL is called again, and you again do feature selection and clustering with the given bounding-box viewport. This makes your clustering dynamic.
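A rough Python sketch of the clustering such a network link handler could do; here bbox is the (west, south, east, north) tuple parsed from the BBOX parameter Google Earth appends to network link requests, and the 8x8 grid size is arbitrary:

from collections import defaultdict

def cluster(points, bbox, cells=8):
    """points: iterable of (lon, lat); bbox: (west, south, east, north)."""
    west, south, east, north = bbox
    dx = (east - west) / cells
    dy = (north - south) / cells
    grid = defaultdict(list)
    for lon, lat in points:
        if west <= lon <= east and south <= lat <= north:
            col = min(cells - 1, int((lon - west) / dx))
            row = min(cells - 1, int((lat - south) / dy))
            grid[(col, row)].append((lon, lat))
    results = []
    for pts in grid.values():
        if len(pts) == 1:
            results.append(("feature", pts[0]))      # single feature: return as-is
        else:
            cx = sum(p[0] for p in pts) / len(pts)   # cluster centroid
            cy = sum(p[1] for p in pts) / len(pts)
            results.append(("cluster", (cx, cy), len(pts)))
    return results  # serialize these as Placemarks in the returned KML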
Does this help?