My goal is to show multiple (small) panes of video on-screen simultaneously.
I would prefer to use the hardware scaler. This is currently working well for a single video on a single surface. For multiple streams it appears multiple SurfaceViews are needed; I don't see a way to use the hardware scaler to blit multiple images into different parts of the same Surface. What's the best way to lock/blit image pixels to these surfaces?
ANativeWindow_unlockAndPost causes a wait-for-vsync + swap (I think?), so I can't call this per-SurfaceView in the same update cycle (well I can, but I get horrible jittering).
One alternative is to use a separate render thread per SurfaceView. Does this seem like a sane avenue to pursue? Are there any other ways to update multiple SurfaceViews with a single wait-for-vsync+swap?
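For what it's worth, here's a minimal sketch of the thread-per-surface idea (fill_frame and start_render_threads are hypothetical names, and the 4-bytes-per-pixel assumption is mine): each SurfaceView's ANativeWindow gets its own thread, so a blocking ANativeWindow_unlockAndPost only stalls that one surface.

```cpp
#include <android/native_window.h>
#include <atomic>
#include <cstdint>
#include <thread>
#include <vector>

std::atomic<bool> g_running{true};

// Trivial placeholder fill: clears the pane to opaque black. Assumes a
// 4-byte/pixel format (e.g. RGBA_8888); real code should check buf.format.
static void fill_frame(const ANativeWindow_Buffer& buf) {
    auto* pixels = static_cast<uint32_t*>(buf.bits);
    for (int32_t y = 0; y < buf.height; ++y)
        for (int32_t x = 0; x < buf.width; ++x)
            pixels[y * buf.stride + x] = 0xFF000000;
}

// One render loop per surface: lock, blit, post. If unlockAndPost waits
// for vsync, it only blocks this thread, not the other panes.
static void render_loop(ANativeWindow* window) {
    while (g_running.load()) {
        ANativeWindow_Buffer buffer;
        if (ANativeWindow_lock(window, &buffer, nullptr) == 0) {
            fill_frame(buffer);
            ANativeWindow_unlockAndPost(window);
        }
    }
}

void start_render_threads(const std::vector<ANativeWindow*>& windows) {
    for (ANativeWindow* w : windows)
        std::thread(render_loop, w).detach();  // one thread per SurfaceView
}
```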
I'm Oliver, a newbie to web animation. For the past two days I've been trying to build a GSAP marquee side project. I built 500 DOM boxes, as in this sandbox:
https://codesandbox.io/s/gsap-marquee-test-6zx2d?file=/src/App.js&fbclid=IwAR1tbmloHRXHUBHKG5FjBGDAx0TFd9sTkBJfSwpye8CQteO-TO8FNi1w4mw
and I have a few questions:
1. I used setTimeout to give each box its own timeline, so that a single box's animation can move to another line immediately after finishing the last one, instead of waiting for the other 499 boxes in the same line to finish, as would happen with the stagger property.
This approach produces 500 timeline instances, which doesn't seem like a good idea. Is there a way to produce the same animation with one timeline, or just a few?
2. If I did this animation in canvas, would the browser render it more efficiently?
You should avoid using setTimeout with GSAP as it's best to use GSAP to control the timing of things.
In this situation, you can probably make use of GSAP's staggers. You should also learn about the position parameter of GSAP's timelines. If you use one (or both, depending on the exact effect that you need) of these you should be able to avoid creating so many timelines.
Additionally, your animation is not responsive. You probably want to make use of functional properties (where your properties of a tween are functions, not just hard numbers) with timeline invalidation to make it responsive.
I also highly recommend going through the most common GSAP mistakes article as you're making some of them.
As for using canvas for rendering your boxes, it probably depends on what your boxes are like. In most cases it'd probably be faster to use canvas, yes. But the slow part of animating these boxes is not anything related to the animation functionality itself, per se. It's related to render speed. In general it's faster to render a bunch of objects to canvas than it is to render a bunch of DOM elements.
I have an application which renders many filled polygons with OpenGL, in 2D. Filling is done by tessellation, but performance is not optimal. 1900 polygons made up of 122000 vertices (that is, about 64 vertices per polygon) are displayed in about 3 seconds.
Apparently, the CPU is not the bottleneck: if I replace calls to gluTessVertex with calls to glColor (just to test where the bottleneck is), performance doubles.
I have the same problem with loading many small textures.
Now, what are the options to improve performance? It seems most of the time is spent in the geometry subsystem; rendering is fast enough.
I already have a worker thread which does the loading (tessellation, texture binding) in one context, and another thread which does the drawing in another context. The two contexts share objects via wglShareLists and it works like a charm.
Can I have a third thread in a third context which would also handle tessellation, for half of the polygons? Has anyone tried that? Is it safe? Any example of sharing objects between three contexts?
Forgot to say, I have an ATI Radeon HD 4550 graphics card; I suppose it can handle more than 39 kB/s of data.
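For reference, the sharing setup described above, extended to a third context, would look roughly like this (a sketch only; hdc is assumed to be a valid device context, and each context must be made current on its own thread):

```cpp
#include <windows.h>

// Sketch: one draw context plus two loader contexts on the same HDC, all
// sharing one object space. wglShareLists should be called while the
// destination context (second argument) does not yet own any objects.
void create_shared_contexts(HDC hdc, HGLRC& draw, HGLRC& load1, HGLRC& load2) {
    draw  = wglCreateContext(hdc);   // render thread's context
    load1 = wglCreateContext(hdc);   // existing worker: tessellation + texture upload
    load2 = wglCreateContext(hdc);   // hypothetical second worker for half the polygons
    wglShareLists(draw, load1);      // load1 shares draw's objects
    wglShareLists(draw, load2);      // load2 shares the same object space
}
```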
Increase Performance
Sounds like you're using the old fixed-function pipeline.
If you're unsure of what that is, well, the following functions are a part of the fixed-function pipeline.
glBegin()
glEnd()
glVertex*()
glTexCoord*()
glNormal*()
glColor*()
etc.
Those functions are old and render geometry immediately. That means that each time you call one of them, that geometry gets sent to the GPU. Do that a lot of times and you can easily make the FPS drop way under 60 just by rendering simple things.
Instead, you need to use buffers, and more precisely VAOs with/or VBOs (and IBOs).
A VBO, or Vertex Buffer Object, is a buffer that stores vertices which you can then render. This is much, much faster than glBegin() and glEnd(). When you create a VBO you supply it with vertices, and they only need to be sent to the GPU once; that's basically why VBOs are fast: the data already lives on the GPU and needs only a single draw call instead of many.
The reason I said "with/or" is that in newer versions you need to create a VAO which then references the VBO, whereas before you could simply render the VBOs directly.
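As a rough sketch of the difference (assuming an OpenGL 3.3+ context with a function loader such as GLEW/GLAD already initialized): the vertex data is uploaded once, and each frame needs only a single draw call.

```cpp
GLuint vao = 0, vbo = 0;

void init_triangle() {
    // One-time upload: after this, the vertices live in GPU memory.
    const GLfloat vertices[] = {
        -0.5f, -0.5f, 0.0f,
         0.5f, -0.5f, 0.0f,
         0.0f,  0.5f, 0.0f,
    };
    glGenVertexArrays(1, &vao);
    glGenBuffers(1, &vbo);
    glBindVertexArray(vao);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(GLfloat), (void*)0);
    glEnableVertexAttribArray(0);
    glBindVertexArray(0);
}

void draw_triangle() {
    // Per frame: one draw call replaces the whole glBegin()/glVertex()/glEnd() block.
    glBindVertexArray(vao);
    glDrawArrays(GL_TRIANGLES, 0, 3);
}
```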
Tessellation
There are multiple ways to do tessellation, and also techniques that give a similar effect.
For instance, you could simply render different models according to the required LOD (Level of Detail): when you're close to an object, you render the model with all its detail, which probably has a high vertex count; the further away you are, the lower-detail (fewer-vertex) version of that model you render. You can't really do that with something like terrain, though, and you definitely shouldn't with dynamic and/or procedurally generated terrain.
You can also do actual geometry tessellation, which you do through a shader. Since tessellation is a really huge topic, I will provide you with two URLs which both explain it and have code on them.
Both of these articles use modern OpenGL 4.0+.
http://prideout.net/blog/?p=48
http://antongerdelan.net/opengl/tessellation.html
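As a bare-bones illustration of the client-side change (OpenGL 4.0+; program is assumed to be a shader program linked with tessellation control and evaluation stages in addition to vertex/fragment), you mainly switch to drawing GL_PATCHES:

```cpp
// 'program', 'vao' and 'vertexCount' are placeholders for your own objects.
void draw_tessellated(GLuint program, GLuint vao, GLsizei vertexCount) {
    glUseProgram(program);
    glPatchParameteri(GL_PATCH_VERTICES, 3);   // 3 control points per patch
    glBindVertexArray(vao);
    glDrawArrays(GL_PATCHES, 0, vertexCount);  // GL_PATCHES instead of GL_TRIANGLES
}
```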
Texturing
Generating and binding textures are still the same.
Instead of using gluBuild2DMipmaps() you can use glGenerateMipmap(GL_TEXTURE_2D); it was added in OpenGL 3.0.
Again, you can (and should) replace all your glBegin()/glEnd() calls (and everything in between) with VAOs and VBOs. You can store everything you want inside a buffer: vertices, texture coordinates, normals, colors, etc. You can store these things in separate buffers, or inside a single buffer, usually called an interleaved buffer or interleaved VBO.
You won't need glEnable(GL_TEXTURE_2D) and glDisable(GL_TEXTURE_2D) anymore; instead you bind textures and sample them in a shader, and since you write the shader program you can make it behave however you want.
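A sketch of that texture path (pixels, width, height, program, and the uniform name uTexture are placeholders): upload once, let glGenerateMipmap build the mip chain, and point a sampler2D uniform at a texture unit instead of toggling glEnable(GL_TEXTURE_2D).

```cpp
GLuint create_texture(const void* pixels, GLsizei width, GLsizei height) {
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glGenerateMipmap(GL_TEXTURE_2D);  // replaces gluBuild2DMipmaps
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    return tex;
}

void bind_for_draw(GLuint program, GLuint tex) {
    // No glEnable(GL_TEXTURE_2D): bind a unit and point the sampler at it.
    glUseProgram(program);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, tex);
    glUniform1i(glGetUniformLocation(program, "uTexture"), 0);
}
```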
I am writing an iOS/Android game and looking for the most performant way to render my vertex data with OpenGL ES 2.0. I have two kinds of data: dynamic data that changes its attributes every frame, for example the player or animated background objects, and static data such as the static background or the terrain. I have googled a lot since yesterday, but I could not find a clear answer to the question of what the best way to render such data is.
There are basically three options for rendering such data (if I'm not missing one; feel free to correct me if so):
Vertex Arrays Only:
Just fill your vertex arrays on the CPU every frame, dynamic and static data alike.
Vertex Buffer Objects Only:
Allocate a VBO on the GPU with GL_DYNAMIC_DRAW in which both the dynamic and the static data are stored. The dynamic data is then updated every frame via glBufferSubData.
Use both:
Static data is stored and rendered with a VBO, and the dynamic data is rendered with a vertex array. With this option, we need two rendering passes: one for rendering the VBO and one for rendering the vertex array.
Since the first option does not exploit the immutability of the static data and since the third option requires two rendering passes, my guess is that I should go with the second option. However, I am absolutely not sure about this and I hope you can clarify my confusion.
Allocate two Vertex Buffer Objects. One with hint GL_DYNAMIC_DRAW that will be updated frequently. Allocate a second VBO for immutable data and use the hint GL_STATIC_DRAW. According to the API documentation, GL_STATIC_DRAW should be used for data that "will be modified once and used many times"; just what you need.
Speaking of two rendering passes here is probably a misuse of the term: what you do is render your scene in two separate drawing commands. Since drawing commands run asynchronously, you should not experience any performance hit by doing so.
A second rendering pass, on the other hand, is when you render the entire scene twice (see for example here) with different settings, or when you do some image processing effects on outputs of previous rendering passes.
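A minimal sketch of the suggested two-buffer setup in ES 2.0 terms (sizes and data pointers are placeholders): the static VBO is filled once with GL_STATIC_DRAW, while the dynamic VBO is allocated once and refreshed each frame with glBufferSubData.

```cpp
// OpenGL ES 2.0: no VAOs in core, just two VBOs with different usage hints.
GLuint staticVbo = 0, dynamicVbo = 0;

void init_buffers(const void* staticData, GLsizeiptr staticSize,
                  GLsizeiptr dynamicSize) {
    glGenBuffers(1, &staticVbo);
    glBindBuffer(GL_ARRAY_BUFFER, staticVbo);
    glBufferData(GL_ARRAY_BUFFER, staticSize, staticData, GL_STATIC_DRAW); // upload once

    glGenBuffers(1, &dynamicVbo);
    glBindBuffer(GL_ARRAY_BUFFER, dynamicVbo);
    glBufferData(GL_ARRAY_BUFFER, dynamicSize, nullptr, GL_DYNAMIC_DRAW);  // allocate only
}

void update_dynamic(const void* frameData, GLsizeiptr dynamicSize) {
    // Per frame: overwrite the dynamic vertices, then issue two draw
    // commands (one per buffer) -- two draw calls, not two rendering passes.
    glBindBuffer(GL_ARRAY_BUFFER, dynamicVbo);
    glBufferSubData(GL_ARRAY_BUFFER, 0, dynamicSize, frameData);
}
```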
I'm trying to add about 1500 overlays to a map view. I get the locations from a database and add them to the map view. Fetching the data from the database is fast, but drawing the overlays on the map takes about 30 seconds. I also want to add overlays based on zoom level (for example, 1000 overlays below level 4 and 2000 at level 4 and above), and redrawing these overlays is killing me. How can I add them in less time?
I've had a different problem with multiple overlays: they caused memory issues on an actual device (not the simulator). The solution was to combine them into a single overlay. This might also solve your problem, as drawing the combined overlay should be much faster:
The credits go to this answer and the code provided on the Apple Dev-forum
You should then be able to create one overlay from all of them and draw that single overlay on the map.
Basically, you create a class that manages the multiple overlays and draws them together onto one OverlayView.
I need to render some CPU generated images in Direct3D 9 and I'm not sure of the best way to get the texture data onto the graphics card as there seems to be a number of approaches.
My usage path each frame goes along the following lines:
Render a bunch of stuff with the textures
Update a few parts of the texture (which may have been used by the previous renders)
Render some more stuff with the texture
Update another part of the texture
and so on
I've thought of a couple of ways to do this; however, I'm not sure which one to go with. I considered benchmarking each method, but I have no way to know whether any results I get are representative of hardware in general, or only of my hardware.
Which pool is best for a texture for this task?
What's the best way to update this texture?
Call LockRect and UnlockRect for each region I need to update
Call LockRect and UnlockRect for the entire texture
Call LockRect and UnlockRect for the entire texture with D3DLOCK_DISCARD and copy in a bitmap from RAM.
Create a completely new texture each time I need to "update it"
Use 1, 2, or 3 to update a surface in D3DPOOL_SYSTEMMEM, then use UpdateSurface to update level 0 of my texture from this surface
Same as 5 but specify RECT to cover the entire area I need
Same as 5 but make multiple calls, one for each region I updated
Probably yet another way to do this I haven't thought of yet...
It should be noted that the areas I'm updating are usually fairly small compared to the size of the entire texture, e.g. the texture may be 1024x1024 and I might want to update five or so 64x64 regions of it.
If you need to update multiple areas, you should lock the whole texture and use the D3DLOCK_NO_DIRTY_UPDATE flag, then for each area call AddDirtyRect before unlocking.
This all depends on the size of the texture, of course; for a small texture it may be more efficient to copy the whole thing from RAM.
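A sketch of that pattern (UpdateRegions and the solid-color fill are hypothetical; texture is assumed to live in a lockable pool): lock the whole level once with D3DLOCK_NO_DIRTY_UPDATE, copy each region, mark it dirty, then unlock.

```cpp
#include <d3d9.h>
#include <vector>

// Hypothetical pixel copy: fill the region with a solid color
// (assumes a 32-bit format such as D3DFMT_A8R8G8B8).
static void CopyPixelsInto(const D3DLOCKED_RECT& locked, const RECT& r) {
    for (LONG y = r.top; y < r.bottom; ++y) {
        auto* row = reinterpret_cast<DWORD*>(
            static_cast<BYTE*>(locked.pBits) + y * locked.Pitch);
        for (LONG x = r.left; x < r.right; ++x)
            row[x] = 0xFF000000;  // opaque black placeholder
    }
}

void UpdateRegions(IDirect3DTexture9* texture,
                   const std::vector<RECT>& updatedRegions) {
    D3DLOCKED_RECT locked;
    // Lock the whole level, but suppress the automatic "everything is dirty" mark.
    if (SUCCEEDED(texture->LockRect(0, &locked, nullptr, D3DLOCK_NO_DIRTY_UPDATE))) {
        for (const RECT& region : updatedRegions) {
            CopyPixelsInto(locked, region);  // write the new pixels for this region
            texture->AddDirtyRect(&region);  // tell D3D only this region changed
        }
        texture->UnlockRect(0);
    }
}
```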
D3DPOOL_DEFAULT
D3DUSAGE_DYNAMIC
Call LockRect and UnlockRect for each region you need to update.
--> This is the fastest!
Benchmark will follow...
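For reference, a sketch of that combination (device, the 1024x1024 size, and the placeholder fill are assumptions): create the texture with D3DUSAGE_DYNAMIC in D3DPOOL_DEFAULT, then lock only the region being refreshed.

```cpp
#include <d3d9.h>

// Dynamic texture in the default pool, updated one region at a time.
IDirect3DTexture9* CreateDynamicTexture(IDirect3DDevice9* device) {
    IDirect3DTexture9* tex = nullptr;
    device->CreateTexture(1024, 1024, 1, D3DUSAGE_DYNAMIC, D3DFMT_A8R8G8B8,
                          D3DPOOL_DEFAULT, &tex, nullptr);
    return tex;
}

void UpdateOneRegion(IDirect3DTexture9* tex) {
    RECT region = {128, 128, 192, 192};  // a single 64x64 region, as in the question
    D3DLOCKED_RECT locked;
    if (SUCCEEDED(tex->LockRect(0, &locked, &region, 0))) {
        // locked.pBits points at the top-left of 'region';
        // rows are locked.Pitch bytes apart.
        for (int y = 0; y < 64; ++y) {
            auto* row = reinterpret_cast<DWORD*>(
                static_cast<BYTE*>(locked.pBits) + y * locked.Pitch);
            for (int x = 0; x < 64; ++x)
                row[x] = 0xFFFFFFFF;  // placeholder pixels
        }
        tex->UnlockRect(0);
    }
}
```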