Does using glBindAttribLocation improve performance? - graphics

My understanding is that glBindAttribLocation allows you to custom set a handle to an attribute (before linking a shader program), which you can later use when rendering with glVertexAttribPointer.
But you don't have to use it, and may instead just rely on OpenGL assigning whatever handle it so chooses in its infinite wisdom. However, you would then need to query OpenGL to find out this handle by using glGetAttribLocation at some point before rendering with glVertexAttribPointer.
Now you could use glGetAttribLocation each time you render, which would seem wasteful since you can just use glGetAttribLocation once after building your program, then store the handle.
So essentially, you can obtain this handle either with glBindAttribLocation or with glGetAttribLocation. Is there any difference performance-wise, and what are the pros and cons of one over the other?

I cannot say much about the direct performance difference, but it should be irrelevant: whether you use glBindAttribLocation or glGetAttribLocation, you do it at initialization time, and even then calling glGetAttribLocation shouldn't hurt that much.
But the main difference and advantage of an explicit glBindAttribLocation over letting GL decide is that it allows you to establish your own attribute semantics and keep them consistent across each and every shader.
Say you have a whole bunch of objects and a whole bunch of shaders. But each shader has some notion of a position attribute (and normal, color, ...), likewise each object has attribute data for positions, normals, ... Now with glBindAttribLocation you can bind your position attribute to location 0 in each and every different shader. So when drawing your objects with different shaders, they can use a single vertex format (i.e. how you call glVertexAttribPointer for the individual attributes, and the individual enable calls).
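As a rough sketch of that convention (a fragment, not complete code: the attribute names, location numbers and the Vertex struct are just an assumed layout, and program/vbo are assumed to be an existing program object with its shaders attached and an existing buffer):
struct Vertex { float position[3]; float normal[3]; float color[4]; };
// 'program' is not linked yet; the bindings only take effect at link time.
glBindAttribLocation(program, 0, "position");
glBindAttribLocation(program, 1, "normal");
glBindAttribLocation(program, 2, "color");
glLinkProgram(program);
// Because every shader follows the same convention, one vertex-format setup serves all of them:
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, position));
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, normal));
glEnableVertexAttribArray(2);
glVertexAttribPointer(2, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)offsetof(Vertex, color));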
On the other hand glGetAttribLocation doesn't give you any guarantees about what attributes get which indices (maybe one shader has some additional attribute and the compiler thinks it's a good way to reorder them, who knows). So in this case you have a different vertex format (glVertexAttribPointer call) for each object and each shader.
This is even more important when using Vertex Array Objects (which encapsulate all the attribute state, especially the glVertexAttribPointer and glEnableVertexAttribArray calls). In this case you usually don't need (and don't want) to call glVertexAttribPointer each time you draw an object with another shader.
So the bottom line is: always use glBindAttribLocation. At best (in a large application) it saves you many object and shader management issues and many unnecessary glVertexAttribPointer calls each frame (which can well be a performance gain), and at the very least (in a very small application) it is good practice and keeps you open and flexible for extensions. As a side note, in desktop GL 3+ (or with the ARB_explicit_attrib_location extension) you can even assign attribute locations directly in the shader without the need for any API call.

Related

VKRay/DXR: What information do you store in your "Payload" structure?

This will be an important design decision in my next ray-tracing project.
We know that the "Payload" structure is used for passing data from the closest-hit shader to the raygen shader. For a recursive PBRT system, basically, I have 2 options here:
Use it to store the ray-surface interaction information of the hit-point (be it a BRDF or a BSDF)
Use it to store the information of the next ray to trace
The 1st option is intuitive. There is a universal representation of the ray-surface interaction. The closest-hit shaders return that information to the raygen shader, where it is processed uniformly to update the integrated color and generate the next ray. A problem with this design is that the BSDF representation can be big and complicated in a PBRT system, and for VKRay/DXR we want the Payload to be as small as possible.
The 2nd option leaves the evaluation of the ray-surface interaction to the closest-hit shaders. Each closest-hit shader can have its own representation of the ray-surface interaction, which can be extremely simple (depending on the geometry representation). In this case, the Payload stores the evaluated results, such as the information of the next ray to trace. A possible issue is that the closest-hit shaders are the divergent parts of the execution flow; would they be less efficient than the raygen shader?
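For concreteness, a rough sketch of what the two payload layouts could look like (written as C-style structs; every field name here is purely illustrative, not from any real project):
// Option 1: the payload carries the hit's surface interaction,
// and the raygen shader evaluates the BSDF and picks the next ray.
struct PayloadSurfaceInteraction {
    float position[3];
    float normal[3];
    float albedo[3];
    int   bsdfType;        // which BSDF model the raygen loop should evaluate
    float bsdfParams[4];   // roughness, IOR, ... (this part can grow large)
};
// Option 2: the closest-hit shader evaluates its own material and only
// returns what the raygen loop needs to continue the path.
struct PayloadNextRay {
    float nextOrigin[3];
    float nextDirection[3];
    float throughput[3];   // BSDF * cos / pdf accumulated so far
    int   terminate;       // nonzero if the path should stop here
};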
Which one do you prefer, and why?
The short answer is that it will depend on the hardware you're running on, and it's worth experimenting with. What we can say is that method 1 will certainly have a smaller memory footprint and will probably give better ray gathering and better caching, but it may also have higher bandwidth due to the increased payload size. Another advantage of method 1 is that you will not be limited by recursion depth.
Of course, method 2 is more flexible from a design perspective. Adding a new material/object would involve adding a new shader without modifying the raygen shader/payload definition.
If you are going for method 1 then you may also want to look into "ray queries", which bypass the need for a closest-hit shader altogether. You will effectively end up with a very bulky raygen shader. It may be less flexible, but it should be faster.

Vulkan: attachment synchronisation with implicit layout transitions

I have read almost everything that Google gave me on this topic and haven't been able to reach a satisfactory conclusion. It's essentially a follow-up question to this one:
Moving image layouts with barrier or renderpasses
Assume I have a color attachment which is written to in one render pass and sampled from in a second one. Let there be only one subpass in both render passes. One way to handle the layout transition and dependencies is to add a barrier between the two render passes, which changes the layout from VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL to VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL.
But Vulkan also offers implicit layout transitions (VkAttachmentDescription, initialLayout and finalLayout). I guess that there is a performance advantage to using them, so let's simply try to get rid of our barrier. We set the initialLayout and finalLayout fields in the VkAttachmentDescription structure and remove the barrier. The problem is, we lost the synchronisation provided by the barrier, so we need to get the synchronisation back by other means. And this is the point where the confusion starts, leading to my questions:
1) What's the recommended way to synchronise the attachment between the two render passes? Obviously I could simply re-add the barrier and not change the layout, but wouldn't that defeat the purpose of the whole exercise, which was to get better performance by using implicit layout transitions and getting rid of the barrier? Or should I add a subpass dependency from the single subpass of render pass 1 to VK_SUBPASS_EXTERNAL? Are there any caveats to using VK_SUBPASS_EXTERNAL performance-wise?
2) What about synchronising the attachment backwards? It is the application's responsibility to transition the attachment to the correct initial layout, which can be done with a barrier, obviously. Can this barrier be replaced to gain a performance advantage? The only way I can think of would be to handle the dependency part with a subpass dependency from VK_SUBPASS_EXTERNAL to the single subpass of render pass 1 and to use a 'fast' barrier (one that doesn't sync) which only does the layout change. Does this make sense? What would that barrier look like? Or is the 'full' barrier unavoidable in this case?
The short version of my questions is simply: how do other people do attachment synchronisation in conjunction with implicit layout transitions?
Generally speaking, when Vulkan or similar low-level APIs offers you multiple tools that can achieve what you want, you should give preference to the most specific tool that can solve your problem (without having to radically re-architect your code or fundamentally impact your design).
In your case, you have two options: barriers, or render pass mechanisms (subpass dependencies and layout transitions). Barriers work with anything; they don't care where the image comes from, what it was used for, or where it is going. Render pass mechanisms only work for stuff that happens in a render pass and primarily deal with images attached to render passes (implicit layout transitions only work on attachments).
Render pass mechanisms are more specific, so you should prefer to use those tools if they meet your needs.
This is also why, if you have two "separate" rendering operations that could be in the same render pass (if you're reading from an attachment in a way that can live within the limitations of input attachments), you should prefer to put them in the same render pass.
The short version of my questions is simply: how do other people do attachment synchronisation in conjunction with implicit layout transitions?
Render pass dependencies are what you are looking for. In the case of two render passes you need to use the mentioned VK_SUBPASS_EXTERNAL value.
It is the application's responsibility to transition the attachment to the correct initial layout, which can be done with a barrier, obviously. Can this barrier be replaced to get a performance advantage?
How you perform the layout transition doesn't matter, but it is your responsibility to transition the image's layout to the one specified as the initial layout. I think the best way is to once again use the implicit layout transitions provided by render passes. If you are using them already, it should be possible to set them up so that the first render pass transitions the image to a layout that matches the initial layout of the second render pass, and the final layout of the second render pass matches the initial layout of the first render pass.
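To make that concrete, here is a minimal sketch for the forward direction (render pass 1 writes the color attachment, render pass 2 samples it). The format and the stage/access masks are assumptions for a typical color-write to fragment-sample hand-off; render pass 2 would declare the mirror-image dependency (VK_SUBPASS_EXTERNAL to subpass 0) with initialLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL.
// Attachment of render pass 1: the implicit transition happens at the end
// of the render pass, so no separate barrier is needed for the layout change.
VkAttachmentDescription color = {};
color.format        = VK_FORMAT_R8G8B8A8_UNORM;
color.samples       = VK_SAMPLE_COUNT_1_BIT;
color.loadOp        = VK_ATTACHMENT_LOAD_OP_CLEAR;
color.storeOp       = VK_ATTACHMENT_STORE_OP_STORE;
color.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;
color.finalLayout   = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
// External dependency of render pass 1: makes the color writes of its single
// subpass visible to fragment-shader reads that happen after the render pass.
VkSubpassDependency toSampling = {};
toSampling.srcSubpass    = 0;
toSampling.dstSubpass    = VK_SUBPASS_EXTERNAL;
toSampling.srcStageMask  = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
toSampling.srcAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
toSampling.dstStageMask  = VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT;
toSampling.dstAccessMask = VK_ACCESS_SHADER_READ_BIT;
// Both structures go into the VkRenderPassCreateInfo of render pass 1.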

What's the use of lightmap?

For starters, I know what a lightmap is, what is gained by using it, and how to implement it. What I don't get is this: if you have dynamically moving objects, which won't be able to generate a lightmap, you still need a light source to project their shadows. So what is gained by lightmapping if we still need a light for the objects that won't get any lightmapping (i.e. dynamic objects)?
Thanks in advance.
If you don't use realtime shadows (a common option, especially on mobile), then you have roughly two approaches for dynamic objects:
Use lightmap data baked into probes to approximate per-vertex lighting (no realtime light needed). It's an approximation, but it can work in some contexts.
Use realtime lights only on dynamic objects, so you improve their look without sacrificing any performance on the static ones, which can use only baked lights.
If you need dynamic shadows cast by dynamic objects onto static baked ones, then you can still benefit from lightmaps for several reasons:
Even if additional lighting passes are necessary to project shadows onto static lightmapped objects, probably not all objects will be affected by the shadow, only the ones relatively close to the dynamic shadow caster. So you can still save a lot of GPU time.
Lightmaps (especially on the forward render path) allow you to produce complex lighting scenarios that would otherwise be impossible to achieve in realtime. Dynamic objects don't need to be affected by all the baked lights, only by the more important ones. This way you can have a limited number of draw calls for really good-looking static environments, and a limited number of "important" lights affecting the dynamic objects.
"if you have dynamically moving objects, which won't be able to generate a lightmap, you still need a light source to project their shadows."
That's true. But:
you save the shading computation on static lightmapped objects, because the realtime light won't affect them
as already said, the projected shadow will only be cast on a limited set of objects

OpenGL geometry performance

I have an application which renders many filled polygons with OpenGL, in 2D. Filling is done by tessellation, but performance is not optimal. 1900 polygons made up of 122,000 vertices (that is, about 64 vertices per polygon) are displayed in about 3 seconds.
Apparently the CPU is not the bottleneck: if I replace the calls to gluTessVertex with calls to glColor (just to test where the bottleneck is), performance doubles.
I have the same problem with loading many small textures.
Now, what are the options to improve the performance? It seems that most time is spent in the geometry subsystem. Rendering is fast enough.
I already have a worker thread which does the loading (tessellation, texture binding) in one context, and another thread which does the drawing in another context. The two contexts share objects via wglShareLists and it works like a charm.
Can I have a third thread in a third context which would also handle tessellation for half of the polygons? Has anyone tried that? Is it safe? Any example of sharing objects between three contexts?
I forgot to say: I have an ATI Radeon HD 4550 graphics card; I suppose it can handle more than 39 kB/s of data.
Increase Performance
Sounds like you're using the old fixed-function pipeline.
If you're unsure of what that is, well, the following functions are a part of the fixed-function pipeline.
glBegin()
glEnd()
glVertex*()
glTexCoord*()
glNormal*()
glColor*()
etc.
Those functions are old and render geometry immediately. That means that each time you call one of the above functions, that geometry gets sent to the GPU. Do that a lot of times and you can easily make the FPS drop way under 60 just by rendering simple things.
Instead you should use buffers, and to be more precise, VAOs with/or VBOs (and IBOs).
A VBO, or Vertex Buffer Object, is a buffer which stores vertices that you can then render. This is much, much faster and better to use than glBegin() and glEnd(). When you create a VBO you supply it with vertices, and they only need to be sent to the GPU once; that's basically why VBOs are fast: the data already lives on the GPU and only requires a single draw call instead of many.
The reason I said "with/or" is that in newer versions you need to create a VAO which then uses a VBO, whereas before you could simply render the VBOs directly.
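For illustration, a minimal sketch of the setup and per-frame draw for a single triangle with a VAO and VBO (assuming an OpenGL 3+ context with a function loader already initialized, and that attribute location 0 is the position input of your shader):
// One-time setup: upload the triangle once and record the format in a VAO.
const float verts[] = {
    -0.5f, -0.5f, 0.0f,
     0.5f, -0.5f, 0.0f,
     0.0f,  0.5f, 0.0f,
};
GLuint vao = 0, vbo = 0;
glGenVertexArrays(1, &vao);
glGenBuffers(1, &vbo);
glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
glBindVertexArray(0);
// Every frame: no per-vertex calls at all, just bind and draw.
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, 3);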
Tessellation
There are multiple ways to do tessellation and things which look like/would give the effect of tessellation.
For instance, you could simply render different models according to the required LOD (Level of Detail): when you're up close to an object you render the model with all its details, which probably has a high vertex count, and the further away you are, the fewer vertices (and thus less detail) the version you render has. You can't really do that on something like terrain, though, and you definitely shouldn't do it on dynamic and/or procedurally generated terrain.
You can also do actual geometry tessellation through a shader. Since tessellation is a really big topic, I will provide you with two URLs which both explain it and include code.
Both of these articles use modern OpenGL 4.0+.
http://prideout.net/blog/?p=48
http://antongerdelan.net/opengl/tessellation.html
Texturing
Generating and binding textures is still the same.
Instead of using gluBuild2DMipmaps() you can use glGenerateMipmap(GL_TEXTURE_2D); it was added around OpenGL 3.0 if I remember correctly.
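For example (a sketch; pixels, width and height are placeholders for image data you have already loaded):
GLuint tex = 0;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);
glGenerateMipmap(GL_TEXTURE_2D);   // replaces gluBuild2DMipmaps()
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);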
Again, you can (and should) replace all your glBegin() - glEnd() calls (and everything in between) with VAOs and VBOs. You can store everything you want inside a buffer: vertices, texture coordinates, normals, colors, etc. You can keep these in separate buffers, or store them all in a single buffer, usually called an interleaved buffer or interleaved VBO.
You also won't need glEnable(GL_TEXTURE_2D) and glDisable(GL_TEXTURE_2D) anymore: you bind textures and sample them within a shader, and since you write the shader program you can make it behave however you want.

OpenGL ES 2.0: Efficient Rendering of Static and Dynamic Vertex Data

I am writing an iOS/Android game and looking for the most performant way to render my vertex data with OpenGL ES 2.0. I have two different kinds of data: dynamic data that changes its attributes every frame, for example the player or animated background objects, and static data such as the static background or the terrain. I have googled a lot since yesterday, but I could not find a clear and definitive answer to the question of what the best way to render such data is.
There are basically three options for rendering such data (if I have missed one, feel free to correct me):
Vertex Arrays Only:
Just fill your vertex arrays every frame on the CPU (including the dynamic data).
Vertex Buffer Objects Only:
Allocate a VBO on the GPU with GL_DYNAMIC_DRAW in which both the dynamic and the static data are stored. The dynamic data is then updated every frame via glBufferSubData.
Use both:
Static data is stored in and rendered from a VBO, and the dynamic data is rendered with a vertex array. With this option we need two rendering passes: one for rendering the VBO and one for rendering the vertex array.
Since the first option does not exploit the immutability of the static data and since the third option requires two rendering passes, my guess is that I should go with the second option. However, I am absolutely not sure about this and I hope you can clarify my confusion.
Allocate two Vertex Buffer Objects: one with the hint GL_DYNAMIC_DRAW that will be updated frequently, and a second one for the immutable data with the hint GL_STATIC_DRAW. According to the API documentation, GL_STATIC_DRAW should be used for data that "will be modified once and used many times"; just what you need.
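A sketch of that setup in GL ES 2.0 terms (staticVerts/dynamicVerts and the byte sizes are placeholders for your actual vertex data):
// Setup (once): two buffers, one per update frequency.
GLuint staticVbo, dynamicVbo;
glGenBuffers(1, &staticVbo);
glGenBuffers(1, &dynamicVbo);
// Static geometry: upload once with GL_STATIC_DRAW and never touch it again.
glBindBuffer(GL_ARRAY_BUFFER, staticVbo);
glBufferData(GL_ARRAY_BUFFER, staticSizeBytes, staticVerts, GL_STATIC_DRAW);
// Dynamic geometry: allocate once with GL_DYNAMIC_DRAW...
glBindBuffer(GL_ARRAY_BUFFER, dynamicVbo);
glBufferData(GL_ARRAY_BUFFER, dynamicSizeBytes, NULL, GL_DYNAMIC_DRAW);
// Per frame: ...and only re-upload the part that actually changed.
glBindBuffer(GL_ARRAY_BUFFER, dynamicVbo);
glBufferSubData(GL_ARRAY_BUFFER, 0, dynamicSizeBytes, dynamicVerts);
// ...then set attribute pointers and issue the draw calls for each buffer as usual.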
Speaking of two rendering passes here is probably a misuse of the term: what you actually do is render your scene with two separate drawing commands. Since drawing commands run asynchronously, you should not experience any performance hit by doing so.
A second rendering pass, on the other hand, is when you render the entire scene twice (see for example here) with different settings, or when you do some image processing effects on outputs of previous rendering passes.

Resources