Adding multiple overlays to a map view takes too long - MKMapView

I'm trying to add about 1500 overlays to a map view. I get the locations from a database and add them to the map. Fetching the data from the database is fast, but drawing the overlays on the map takes about 30 seconds. I also want to vary the overlay count by zoom level (for example, 1000 overlays below zoom level 4 and 2000 overlays at level 4 and above), and redrawing all these overlays has me stuck. How can I add them in less time?

I've had another problem with multiple overlays: they caused memory issues on an actual device (not the simulator). The solution was to create one overlay from all of them. This might also be the solution to your problem, as drawing the combined overlay should be a lot faster.
The credit goes to this answer and the code provided on the Apple developer forum.
You should then be able to create one overlay from all of them and draw that single overlay on the map.
Basically, you create a class that holds the multiple overlays and draws them together onto one overlay view.

Related

Determine if a point is already shown on Google Earth

I have a large dataset of points, and to avoid showing them all at the same time I'm using nested regions. These regions contain network links to a server, which generates KML files of points and more network links. I show x points at the outermost region; that region is made up of smaller regions, each showing x points, and so on. I don't want to show any point more than once, but each point is eligible under many different regions. How can I determine whether I've already shown a point?
Ideas:
Use a deterministic algorithm to pick the points for a level, so each region can work backwards to compute which points have already been selected. The problem is that as soon as a point is added to the database, the algorithm no longer works, because its inputs have changed. It also sounds slow.
Store in the database, at population time, the region at which a point should be shown, based on some algorithm. The problem is that you have to decide where to put every point as you add it (though initially assigning regions for all existing points isn't that bad).
Edit: Or, instead of using cumulative regions, I could set a maximum LOD for each region.
Con: lots of points could be loaded multiple times. Pro: it should be pretty simple, and updating the DB while already in Google Earth won't hurt anything.
Edit 2: I tried using exclusive regions (https://developers.google.com/kml/documentation/images/LodRange3.gif) and ran into some problems. Namely, where one region ends (max LOD) and another starts (min LOD), you can set a slight overlap to avoid a gap, but it will never be perfect. It's also only okay if you start fully zoomed out, so the outermost region is active; then, as you zoom in, the inner regions become active. If you start zoomed in, the outer region is never loaded, so it never loads any of its subregions and the placemarks won't be displayed. Generating all the possible regions from the start (and just not displaying them) is not an option, as there would be millions of files.
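
One way to make the first idea hold up under inserts is to derive each point's display level from a stable hash of its own ID alone, so the level never depends on the rest of the dataset (this also gives you the second idea for free: store the computed level alongside the point). A minimal C++ sketch, where the ID format, level count, and per-level fan-out are all assumptions:

#include <cstdint>
#include <string>

// Hypothetical: each point gets a fixed display level from a stable hash of
// its ID. Adding new points never changes the level of existing ones, so any
// region can recompute which points already surfaced at coarser levels.
constexpr int kLevels = 6;  // assumed depth of the region hierarchy

// FNV-1a: deterministic across runs and machines (unlike std::hash).
uint64_t fnv1a(const std::string& s) {
    uint64_t h = 14695981039346656037ull;
    for (unsigned char c : s) { h ^= c; h *= 1099511628211ull; }
    return h;
}

// Roughly a quarter of the remaining points surface at each coarser level.
int levelForPoint(const std::string& pointId) {
    uint64_t h = fnv1a(pointId);
    for (int level = 0; level < kLevels - 1; ++level) {
        if (h % 4 == 0) return level;
        h /= 4;
    }
    return kLevels - 1;  // only visible when fully zoomed in
}

A region at depth d would then serve only points whose level equals d; since each point has exactly one level, it can never be shown twice.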

OpenGL geometry performance

I have an application that renders many filled polygons with OpenGL, in 2D. Filling is done by tessellation, but performance is not optimal: 1900 polygons made up of 122,000 vertices (that is, about 64 vertices per polygon) take about 3 seconds to display.
Apparently the CPU is not the bottleneck: if I replace the calls to gluTessVertex with calls to glColor (just to test where the bottleneck is), performance doubles.
I have the same problem with loading many small textures.
Now, what are my options for improving performance? It seems most of the time is spent in the geometry subsystem; rendering itself is fast enough.
I already have a worker thread that does the loading (tessellation, texture binding) in one context, and another thread that does the drawing in another context. The two contexts share objects via wglShareLists, and it works like a charm.
Could I add a third thread, in a third context, to handle tessellation for half of the polygons? Has anyone tried that? Is it safe? Any examples of sharing objects between three contexts?
Forgot to say: I have an ATI Radeon HD 4550 graphics card, which I assume can handle more than 39 kB/s of data.
Increase Performance
Sounds like you're using the old fixed-function pipeline.
If you're unsure what that is: the following functions are part of the fixed-function pipeline.
glBegin()
glEnd()
glVertex*()
glTexCoord*()
glNormal*()
glColor*()
etc.
Those functions are old and render geometry immediately: each time you call one of them, that geometry gets sent to the GPU. Do that enough times and you can easily push the FPS way under 60 just by rendering simple things.
Instead, you should use buffers, and to be more precise, VAOs with/or VBOs (and IBOs).
A VBO, or Vertex Buffer Object, is a buffer that stores vertices which you can then render. This is much, much faster than glBegin() and glEnd(): when you create a VBO you supply it with vertices, and they only need to be sent to the GPU once. That is basically why VBOs are fast; the data already lives on the GPU, and rendering takes a single draw call instead of many.
The reason I said "with/or" is that in newer versions you need to create a VAO, which then uses a VBO, whereas before you could simply render the VBOs directly.
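
A minimal sketch of the VAO/VBO setup (OpenGL 3.x core profile, assuming an extension loader such as GLEW or GLAD, a current context, and a shader program already bound; the triangle is just placeholder data):

// One-time setup: upload the vertices to the GPU once.
GLfloat vertices[] = {
    -0.5f, -0.5f,   // placeholder triangle, x/y pairs
     0.5f, -0.5f,
     0.0f,  0.5f,
};
GLuint vao, vbo;
glGenVertexArrays(1, &vao);
glGenBuffers(1, &vbo);
glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 2 * sizeof(GLfloat), nullptr);
glEnableVertexAttribArray(0);
glBindVertexArray(0);

// Per frame: a single draw call, no per-vertex traffic to the GPU.
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, 3);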
Tessellation
There are multiple ways to do tessellation, and also techniques that give a similar effect.
For instance, you could simply render different models according to the required LOD (level of detail): when you're close to an object, you render the version with all its details and a high vertex count; the further away you are, the lower-detail, lower-vertex version you render instead. You can't really do that with terrain, though, and definitely shouldn't with dynamic and/or procedurally generated terrain.
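
A sketch of that selection, where LodMesh, the thresholds, and the camera distance are all placeholder assumptions:

#include <vector>

struct LodMesh {
    float maxDistance;  // use this mesh while the camera is closer than this
    // vertex/index buffer handles for this detail level would live here
};

// lods is assumed sorted from most detailed (smallest maxDistance) to least.
const LodMesh& selectLod(const std::vector<LodMesh>& lods, float distance) {
    for (const LodMesh& lod : lods)
        if (distance < lod.maxDistance) return lod;
    return lods.back();  // beyond every threshold: coarsest mesh
}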
You can also do actual geometry tessellation, which you do through a shader. Since tessellation is a really huge topic, I'll point you to two articles that both explain it and provide code.
Both of these articles use modern OpenGL 4.0+.
http://prideout.net/blog/?p=48
http://antongerdelan.net/opengl/tessellation.html
Texturing
Generating and binding textures works the same as before.
Instead of using gluBuild2DMipmaps(), you can use glGenerateMipmap(GL_TEXTURE_2D); it was added in OpenGL 3.0-ish, if I remember correctly.
Again, you can (and should) replace all your glBegin()/glEnd() calls (and everything in between) with VAOs and VBOs. You can store everything you want in a buffer: vertices, texture coordinates, normals, colors, etc. You can keep them in separate buffers, or in a single buffer, usually called an interleaved buffer or interleaved VBO.
You won't need glEnable(GL_TEXTURE_2D) and glDisable(GL_TEXTURE_2D) anymore, because texturing is handled in a shader: you bind textures and sample them in the shader, and since you write the shader program, you can make it behave however you want.
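
Putting those texturing points together, a sketch of the modern setup (pixels, its dimensions, and the shader's sampler name "tex" are assumptions):

// One-time setup (OpenGL 3.0+).
GLuint texture;
glGenTextures(1, &texture);
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glGenerateMipmap(GL_TEXTURE_2D);  // replaces gluBuild2DMipmaps()
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,
                GL_LINEAR_MIPMAP_LINEAR);

// At draw time: no glEnable(GL_TEXTURE_2D). Bind a texture unit and point
// the shader's sampler uniform at it.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture);
glUniform1i(glGetUniformLocation(program, "tex"), 0);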

multiple SurfaceView render targets and vsync

My goal is to show multiple (small) panes of video on-screen simultaneously.
I would prefer to use the hardware scaler. This currently works well for a single video on a single Surface. For multiple streams, it appears that multiple SurfaceViews are needed; I don't see a way to use the hardware scaler to blit multiple images into different parts of the same Surface. What's the best way to lock/blit image pixels to these surfaces?
ANativeWindow_unlockAndPost causes a wait-for-vsync plus a swap (I think?), so I can't call it once per SurfaceView in the same update cycle (well, I can, but I get horrible jittering).
One alternative is a separate render thread per SurfaceView. Does this seem like a sane avenue to pursue? Are there any other ways to update multiple SurfaceViews with a single wait-for-vsync and swap?
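
For the thread-per-SurfaceView route, a minimal NDK sketch (fillFrame and the stop flag are placeholders; each ANativeWindow would come from ANativeWindow_fromSurface). Each thread blocks on its own window's post, so one surface waiting for vsync doesn't stall the others:

#include <android/native_window.h>
#include <atomic>
#include <thread>

std::atomic<bool> running{true};

// Placeholder: write one stream's pixels into buffer.bits, honoring
// buffer.stride (which is in pixels, not bytes).
void fillFrame(const ANativeWindow_Buffer& buffer);

void renderLoop(ANativeWindow* window) {
    while (running.load()) {
        ANativeWindow_Buffer buffer;
        if (ANativeWindow_lock(window, &buffer, nullptr) != 0) break;
        fillFrame(buffer);
        ANativeWindow_unlockAndPost(window);  // queues the buffer; may block
    }
}

// One thread per window:
// std::thread t1(renderLoop, window1), t2(renderLoop, window2);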

Best way to update a Direct3D Texture

I need to render some CPU-generated images in Direct3D 9, and I'm not sure of the best way to get the texture data onto the graphics card, as there seem to be a number of approaches.
My usage path goes along the following lines each frame:
Render a bunch of stuff with the textures
Update a few parts of the texture (which may have been used by the previous renders)
Render some more stuff with the texture
Update another part of the texture
and so on
I've thought of a couple of ways to do this, but I'm not sure which one to go with. I considered benchmarking each method, but I have no way of knowing whether any results I get would be representative of hardware in general, or only of my hardware.
Which pool is best for a texture for this task?
What's the best way to update this texture?
1. Call LockRect and UnlockRect for each region I need to update.
2. Call LockRect and UnlockRect for the entire texture.
3. Call LockRect and UnlockRect for the entire texture with D3DLOCK_DISCARD and copy in a bitmap from RAM.
4. Create a completely new texture each time I need to "update" it.
5. Use 1, 2, or 3 to update a surface in D3DPOOL_SYSTEMMEM, then use UpdateSurface to update level 0 of my texture from that surface.
6. Same as 5, but specify a RECT covering the entire area I need.
7. Same as 5, but make multiple calls, one for each region I updated.
8. Probably yet another way to do this I haven't thought of yet...
It should be noted that the areas I'm updating are usually fairly small compared to the size of the entire texture; e.g. the texture may be 1024x1024 and I might want to update five or so 64x64 regions of it.
If you need to update multiple areas, you should lock the whole texture with the D3DLOCK_NO_DIRTY_UPDATE flag, then call AddDirtyRect for each area before unlocking.
Of course, this all depends on the size of the texture and so on; for a small texture it may be more efficient to copy the whole thing from RAM.
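
A sketch of that lock-once-and-mark approach (texture is assumed to be an IDirect3DTexture9* in D3DPOOL_MANAGED; regions and copyRegion stand in for your update areas and your CPU-side copy):

D3DLOCKED_RECT locked;
// NO_DIRTY_UPDATE: don't mark the whole level dirty just because we locked it.
if (SUCCEEDED(texture->LockRect(0, &locked, nullptr, D3DLOCK_NO_DIRTY_UPDATE))) {
    for (const RECT& r : regions) {
        // copyRegion(void* bits, INT pitch, const RECT&) is a placeholder
        // that writes only this region's pixels.
        copyRegion(locked.pBits, locked.Pitch, r);
        texture->AddDirtyRect(&r);  // tell D3D exactly what to re-upload
    }
    texture->UnlockRect(0);
}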
D3DPOOL_DEFAULT
D3DUSAGE_DYNAMIC
Call LockRect and UnlockRect for each region you need to update (option 1).
--> This is the fastest!
Benchmark will follow...
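
For reference, a sketch of creating and updating such a texture (device is an IDirect3DDevice9*; the size, format, and region are assumptions):

IDirect3DTexture9* texture = nullptr;
device->CreateTexture(1024, 1024, 1,       // width, height, one mip level
                      D3DUSAGE_DYNAMIC,    // CPU-updatable every frame
                      D3DFMT_A8R8G8B8,
                      D3DPOOL_DEFAULT,     // dynamic usage requires this pool
                      &texture, nullptr);

// Per update: lock only the region that changed.
RECT region = { 0, 0, 64, 64 };            // placeholder 64x64 area
D3DLOCKED_RECT locked;
if (SUCCEEDED(texture->LockRect(0, &locked, &region, 0))) {
    // write new pixels through locked.pBits / locked.Pitch ...
    texture->UnlockRect(0);
}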

Modelling an I-Section in a 3D Graphics Library

I am using Direct3D to display a number of I-sections used in steel construction. There could be hundreds of instances of these I-sections all over my scene.
I could do this two ways:
Using method A, I have fewer surfaces. However, with backface culling turned on, the surfaces are visible from only one side; if backface culling is turned off, the flanges (horizontal plates) and web (vertical plate) may be rendered in the wrong order.
Method B seems correct (and I could keep backface culling turned on), but in my model the thickness of the plates is of no importance, and I would like to avoid having to create a separate triangle strip for each side of the plates.
Is there a better solution? Is there a way to switch off backface culling for only certain calls of DrawIndexedPrimitives? I would also like a platform-neutral answer to this, if there is one.
First off, backface culling has nothing to do with the order in which objects are rendered. Other than that, I'd go for approach B, for no particular reason other than that it'll probably look better. Also, this object is probably no more than a handful of triangles, so having hundreds of them in a scene shouldn't be an issue. If it is, look into hardware instancing.
In OpenGL you can switch backface culling on and off between draw calls (it is per-draw state, not per-triangle):
glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT);  // cull front faces, keep back faces
// or
glCullFace(GL_BACK);   // cull back faces (the usual default)
I think something similar is also possible in Direct3D.
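
It is; in D3D9 the cull mode is a render state that can be changed between draw calls (assuming device is your IDirect3DDevice9*):

device->SetRenderState(D3DRS_CULLMODE, D3DCULL_NONE);  // draw both sides
// ... DrawIndexedPrimitive calls for the zero-thickness plates ...
device->SetRenderState(D3DRS_CULLMODE, D3DCULL_CCW);   // restore the default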
If your I-sections don't change that often, load all of them into one big vertex/index buffer and draw them with a single call. That's the most performant way to draw, and the graphics card will make short work of it even if you push half a million triangles at it.
Yes, this requires duplicating the vertex data for all sections, but that's how D3D9 is intended to be used.
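
A rough sketch of that batching (Vertex, Section, sections, and device are placeholder assumptions; vertices are assumed pre-transformed to world space so one draw call covers every section):

struct Vertex { float x, y, z; DWORD color; };  // placeholder layout
struct Section { std::vector<Vertex> vertices; std::vector<DWORD> indices; };

UINT vertexCount = 0, indexCount = 0;
for (const Section& s : sections) {
    vertexCount += s.vertices.size();
    indexCount  += s.indices.size();
}

IDirect3DVertexBuffer9* vb = nullptr;
IDirect3DIndexBuffer9*  ib = nullptr;
device->CreateVertexBuffer(vertexCount * sizeof(Vertex), D3DUSAGE_WRITEONLY,
                           0, D3DPOOL_MANAGED, &vb, nullptr);
device->CreateIndexBuffer(indexCount * sizeof(DWORD), D3DUSAGE_WRITEONLY,
                          D3DFMT_INDEX32, D3DPOOL_MANAGED, &ib, nullptr);

// Copy every section in, offsetting its indices by the vertices already written.
Vertex* v = nullptr; DWORD* idx = nullptr;
vb->Lock(0, 0, reinterpret_cast<void**>(&v), 0);
ib->Lock(0, 0, reinterpret_cast<void**>(&idx), 0);
UINT base = 0;
for (const Section& s : sections) {
    for (const Vertex& vert : s.vertices) *v++ = vert;
    for (DWORD i : s.indices) *idx++ = base + i;
    base += s.vertices.size();
}
ib->Unlock();
vb->Unlock();

// Render the lot with a single call.
device->SetStreamSource(0, vb, 0, sizeof(Vertex));
device->SetIndices(ib);
device->DrawIndexedPrimitive(D3DPT_TRIANGLELIST, 0, 0, vertexCount,
                             0, indexCount / 3);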
I would go with A: at the distance you would be seeing them from, drawing all those nearly degenerate side triangles in B would be a waste of processing power.
Also, I would simply fire them at the z-buffer and let it sort everything out.
If it gets too slow, then I would start looking at optimizing, but even consumer graphics cards can draw millions of polygons per second.
