As it relates to texture atlases, what is a "quad"?

It is my understanding that a texture atlas is basically a single texture that contains many smaller textures and that they are useful for making games or animations faster because they allow you to access many animation frames by loading a single file rather than files for each and every frame.
So, in discussions of texture atlases, I see the term "quad" mentioned everywhere - Is a quad simply the x, y, width and height of an individual texture from a texture atlas or am I missing something?

A quad is a quadrilateral - a four-vertex polygon, not necessarily a rectangle. In the context of texture atlases it refers to the piece of geometry (usually drawn as two triangles) that an individual sub-texture gets mapped onto; the x, y, width and height you mention only describe which region of the atlas is mapped onto that quad, via the UV coordinates of its four vertices.
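To make that concrete, here is a minimal sketch (plain C++; the struct and function names are made up for illustration) of turning a sub-rectangle of the atlas, given in pixels, into the four textured vertices of a quad:

```cpp
#include <array>

// Illustrative helper types - not taken from any particular API.
struct Vertex    { float x, y;  float u, v; };  // position in screen units, UV in [0,1]
struct AtlasRect { int x, y, w, h; };           // sub-texture location in the atlas, in pixels

// Build the four corners of a quad that displays one sub-texture of the atlas.
std::array<Vertex, 4> makeQuad(float px, float py, const AtlasRect& r,
                               int atlasW, int atlasH)
{
    // Convert the pixel rectangle into normalized UV coordinates.
    float u0 = static_cast<float>(r.x)       / atlasW;
    float v0 = static_cast<float>(r.y)       / atlasH;
    float u1 = static_cast<float>(r.x + r.w) / atlasW;
    float v1 = static_cast<float>(r.y + r.h) / atlasH;

    return {{
        { px,       py,       u0, v0 },  // top-left
        { px + r.w, py,       u1, v0 },  // top-right
        { px + r.w, py + r.h, u1, v1 },  // bottom-right
        { px,       py + r.h, u0, v1 },  // bottom-left
    }};
}
```

The quad itself is just geometry; the atlas rectangle only determines the UV coordinates assigned to its corners.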


What is a deep frame buffer?

In a real-time graphics application, I believe a frame buffer is the memory that holds the final rasterised image that will be displayed for a single frame.
References to deep frame buffers seem to imply there's some caching going on (vertex and material info), but it's not clear what this data is used for, or how.
What specifically is a deep frame buffer in relation to a standard frame buffer, and what are its uses?
Thank you.
Google is your friend.
It can mean two things:
1. You're storing more than just RGBA per pixel. For example, you might be storing normals or other lighting information so you can do re-lighting later. See:
Interactive Cinematic Relighting with Global Illumination
Deep Image Compositing
2. You're storing more than one color and depth value per pixel. This is useful, for example, to support order-independent transparency.
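As a rough illustration of the difference (hypothetical struct names, not tied to any particular API), the two flavours differ only in what gets stored per pixel:

```cpp
#include <cstdint>
#include <vector>

// Meaning 1: still one sample per pixel, but with extra channels beyond RGBA
// (e.g. for deferred shading or later re-lighting).
struct FatPixel {
    float    rgba[4];
    float    normal[3];   // surface normal visible through this pixel
    float    depth;       // distance from the camera
    uint32_t materialId;  // which material was visible here
};

// Meaning 2: an arbitrary number of (color, depth) samples per pixel,
// e.g. for order-independent transparency or deep compositing.
struct DeepSample { float rgba[4]; float depth; };
struct DeepPixel  { std::vector<DeepSample> samples; };
```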
A z-buffer is similar to a color buffer, which is usually used to store the "image" of a 3D scene, but instead of storing color information (in the form of a 2D array of RGB pixels), it stores the distance from the camera to the object visible through each pixel of the framebuffer.
Traditionally, a z-buffer only stores the distance from the camera to the nearest object in the 3D scene for any given pixel of the frame. The good thing about this technique is that if two images have been rendered along with their z-buffers, they can be re-composed in a 2D program, with the pixels of image A that are "in front of" the pixels of image B composed on top in the resulting image. To decide which pixels are in front, we can use the information stored in the images' respective z-buffers. For example, imagine we want to compose pixels from images A and B at pixel coordinates (100, 100). If the distance (z value) stored in the z-buffer at coordinates (100, 100) is 9.13 for image A and 5.64 for image B, then in the recomposed image C, at pixel coordinates (100, 100), we take the pixel from image B (because it corresponds to a surface in the 3D scene which is in front of the object visible through that pixel in image A).
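A minimal sketch of that depth comparison, assuming both images are the same size and each carries a color buffer plus a z-buffer (all type and function names here are made up):

```cpp
#include <vector>

struct Rgba { float r, g, b, a; };

struct RgbaZImage {
    int width, height;
    std::vector<Rgba>  color;  // width * height pixels
    std::vector<float> depth;  // width * height z values (distance from camera)
};

// Hard (opaque) depth composite: for each pixel, keep whichever
// image's surface is closer to the camera.
RgbaZImage compositeByDepth(const RgbaZImage& a, const RgbaZImage& b)
{
    RgbaZImage out = a;  // same size assumed; start from A
    for (size_t i = 0; i < out.color.size(); ++i) {
        if (b.depth[i] < a.depth[i]) {   // B is in front at this pixel
            out.color[i] = b.color[i];
            out.depth[i] = b.depth[i];
        }
    }
    return out;
}
```

With the numbers above, 5.64 < 9.13 at pixel (100, 100), so the output takes that pixel from image B.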
Now this works great when objects are opaque, but not when they are transparent. When objects are transparent (such as when we render volumes, clouds, or layers of transparent surfaces) we need to store more than one z value. Also note that "opacity" changes as the density of the volumetric object or the number of transparent layers increases. The point is that a deep image or deep buffer is technically just like a z-buffer, but rather than storing only one depth or z value per pixel, it stores several depth values, along with the opacity of the object at each of these depths.
Once we have stored this information, it is possible in post-production to properly (that is, accurately) recompose two or more images together with transparency. For instance, if you render two clouds and these clouds overlap in depth, their visibility will be properly recomposed as if they had been rendered together in the same scene.
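A sketch of that recomposition, assuming each deep pixel is a list of samples carrying premultiplied color, opacity and depth (names are illustrative; real implementations also handle samples that overlap in depth, volumetric segments, and so on):

```cpp
#include <algorithm>
#include <vector>

struct Sample { float r, g, b, a; float z; };  // premultiplied color, opacity, depth
using DeepPixel = std::vector<Sample>;          // all samples behind one pixel

// Merge the samples of two deep pixels and composite them front to back
// with the "over" operator, yielding one flat RGBA value.
void flattenDeepPixel(DeepPixel a, const DeepPixel& b, float out[4])
{
    a.insert(a.end(), b.begin(), b.end());      // merge both images' samples
    std::sort(a.begin(), a.end(),
              [](const Sample& s1, const Sample& s2) { return s1.z < s2.z; });

    float accum[4] = {0, 0, 0, 0};              // accumulated premultiplied RGBA
    for (const Sample& s : a) {
        float t = 1.0f - accum[3];              // transparency remaining in front
        accum[0] += s.r * t;
        accum[1] += s.g * t;
        accum[2] += s.b * t;
        accum[3] += s.a * t;
    }
    std::copy(accum, accum + 4, out);
}
```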
Why would we use such a technique at all? Often because rendering scenes containing volumetric elements is generally slow. Thus it's good to render them separately from the other objects in the scene, so that if you need to make tweaks to the solid objects you do not need to re-render the volumetric elements again.
This technique was mostly made popular by Pixar, in the renderer they develop and sell (PRMan). Avatar (Weta Digital in NZ) was one of the first films to make heavy use of deep compositing.
See: http://renderman.pixar.com/resources/current/rps/deepCompositing.html
The cons of this technique: deep images are very heavy. They require storing many depth values per pixel (and these values are stored as floats). It's not uncommon for such images to range from a few hundred megabytes to a couple of gigabytes, depending on the image resolution and the scene's depth complexity. Also, while you can recompose volumetric objects properly, they won't cast shadows on each other, which you would get if you rendered the objects together in the same scene. This makes scene management slightly more complex than usual, but it is generally dealt with properly.
A lot of this information can be found on scratchapixel.com (for future reference).

How do I apply different textures to multiple primitives? (Direct3D 9)

I am creating a game in which every primitive needs its own texture, but I can't seem to figure out how to do this. I searched Google, but it only displays results about texture blending. Can you please tell me how to apply multiple textures to multiple non-indexed primitives? Or do they have to be indexed?
You can change textures by calling SetTexture before each DrawPrimitive call.
I think using a UV atlas can solve your problem. An atlas is basically a large texture made up of smaller textures, like a photo collage. The UV coordinates of your vertices of course refer to the large texture, but if you know the position of each "small" texture within it, the coordinates are easy to calculate.
Of course you have to create that atlas texture first.
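A rough sketch of the SetTexture approach mentioned above, assuming the device, the textures and a vertex buffer holding one triangle per primitive are already set up (error handling omitted; the function name is made up):

```cpp
#include <d3d9.h>
#include <vector>

// One texture per primitive: bind the texture, then draw the primitive that uses it.
// Assumes the vertex buffer holds the triangles back to back, in the same order
// as the texture list.
void drawPrimitives(IDirect3DDevice9* device,
                    const std::vector<IDirect3DTexture9*>& textures)
{
    for (size_t i = 0; i < textures.size(); ++i) {
        device->SetTexture(0, textures[i]);  // bind to texture stage 0
        device->DrawPrimitive(D3DPT_TRIANGLELIST,
                              static_cast<UINT>(i * 3),  // start vertex
                              1);                         // one triangle
    }
}
```

With the atlas approach you would instead call SetTexture once for the whole atlas and select each sub-texture purely through the UV coordinates stored in the vertices.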

Performance of rendering SVG images on HTML 5 canvas

SVG images are great for highly detailed graphics, but since they consist of a number of coordinates that need to be calculated before rendering, are they potentially bad for performance, say compared to rendering a JPEG, which is simply drawing an array of pre-calculated pixels?
I use Context.drawImage, and I do not know if the SVG graphics need to be recalculated for every drawn frame of the canvas or if they are perhaps cached somehow. Or maybe I'm worrying about nothing?
The performance will depend on your specific application and the complexity of your graphic, but generally speaking the vector graphics are not going to have a significant impact. Your main bottleneck will typically be in manipulating the pixel data in the canvas; the larger your canvas, the more time it will take to draw.
Unless you are redrawing the canvas every frame however, the only calculations that are performed at all are those made when you initially draw the image. When you are not modifying it, the canvas is effectively nothing more than a static bitmap.

Power of 2 Textures with Sprite Animation

I want a texture to contain each frame of a sprite's animation. Let's say each frame is 128x128 pixels and there are 4 frames; then they easily fit into one 256x256 texture. If I have, for instance, 25 frames, then they'd have to fit into one 640x640 texture (128*5=640). However, I read that texture dimensions should be powers of 2 for best results, forcing the dimensions up to 1024x1024, which is much larger than the original size. In this case, would it be better to load each frame into its own 128x128 texture?
Each time you change the texture you suffer a performance hit. As such, it would be better to use one large texture, especially if you have multiple copies of the same sprite that could be in different frames of the animation.
Some hardware will not support non-power-of-2 (NPOT) textures, but these are becoming fewer and further between these days. It's probably best to keep to the power-of-2 (POT) texture limitation. Have you checked to see if you can get multiple different sprites and their animations into one large texture? The more frames of sprites you can pack into a single texture, the fewer times you need to change the texture, and hence the faster things will run.
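For illustration, here is a small sketch of the bookkeeping from the question (25 frames of 128x128), assuming square frames packed left to right, top to bottom into a power-of-two sheet; the helper names are made up:

```cpp
#include <cstdio>

// Round up to the next power of two (e.g. 640 -> 1024).
unsigned nextPowerOfTwo(unsigned v)
{
    unsigned p = 1;
    while (p < v) p <<= 1;
    return p;
}

// Where does frame 'index' live in a sheet 'sheetSize' pixels wide,
// if every frame is 'frameSize' pixels square?
void framePixelRect(unsigned index, unsigned frameSize, unsigned sheetSize,
                    unsigned* outX, unsigned* outY)
{
    unsigned framesPerRow = sheetSize / frameSize;  // e.g. 1024 / 128 = 8
    *outX = (index % framesPerRow) * frameSize;
    *outY = (index / framesPerRow) * frameSize;
}

int main()
{
    // 25 frames of 128x128: a 5x5 grid needs 640 pixels, rounded up to 1024.
    unsigned sheet = nextPowerOfTwo(5 * 128);
    unsigned x, y;
    framePixelRect(24, 128, sheet, &x, &y);  // last frame
    std::printf("sheet %u, frame 24 at (%u, %u)\n", sheet, x, y);
}
```

Note that the width and height only need to be powers of two individually, so 25 frames of 128x128 would also fit in a 1024x512 sheet (8 frames per row, 4 rows), which wastes less memory than 1024x1024.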

Sprites in game programming, multiple files vs one "texture"?

Pardon me if my lingo is not correct, as I'm new to game programming. I've been looking at some open source projects and noticed that some sprites are split up into several files, all of which are grouped together to make a 2D object look like it's animating. That's straightforward. Then I'll see a different approach, with the 2D object all in one PNG file or something similar, with all the frames next to each other.
Is there an advantage of using one approach to another? Should sprites be in separate files? Why are they sometimes all on one sheet?
The former approach is typically more straightforward and easy to program, so you see a lot of it in open source projects.
The second approach is more efficient on modern graphics hardware, because it allows you to draw multiple different sprites from one large texture by specifying different u,v coordinates to select each individual sprite from the composite sheet. Because u,v coordinates can be streamed along with vertex data to a shader, this allows you to draw a large group of sprites much more efficiently than you could if you had to switch textures (which means changing shader state) for each poly. That means you can draw more sprites per millisecond, and thus get more on screen.
Every time you switch your currently bound texture you incur a penalty (sometimes a very big one if the system runs out of memory and starts paging textures in and out). So the more things you can draw with one texture the better. Going to extremes, if you never switched texture bindings, you'd incur 0 penalty.
On the other hand, video cards limit the maximum size of a texture, so you can only group so many smaller textures into a big one. The older the card, the smaller the texture size you can use. So if you want to make your game work on a large variety of cards, you have to limit your textures to a more conservative size (or have different sets of textures for different cards).
Another problem is that sometimes the stuff in your virtual world just doesn't lend itself to being grouped like this. While you can have a big texture with every little decoration for your UI (window frames, buttons, etc.), you're going to have a harder time using a single texture for different enemies, because they might not even appear on the screen at the same time, or you might be unable to draw them one after the other because of the back-to-front drawing order required for transparency.
Not so long ago, one reason to use packed sprites instead of separate ones was that graphics hardware was limited to power-of-two textures (256, 512, 1024, ...). You would waste a good amount of memory by not packing the sprites, as you would have to enlarge everything to power-of-two dimensions before you could upload it. Packing multiple sprites into a single texture worked around that.
Another reason is that it's much quicker to load one big image file from the hard drive than it is to load hundreds of small ones. This is still the case, as file access comes with quite a large per-file overhead, so the fewer files you have, the faster things become. And especially with small sprites, you can easily turn a hundred files into a single one, so the saving can be quite noticeable.
There are, however, also reasons against having everything in one texture. For one, OpenGL is no longer limited to power-of-two textures, so any size will work. But more importantly, packing everything into one texture has negative side effects. When you have lots of scaling in a game, for example, you have to be careful about the borders of your sprites, as colors will bleed into neighboring sprites, giving you ugly artifacts. You can avoid that to a certain degree by adding extra space around your sprites, but it's not a perfect solution. Having everything in one texture also limits what you can do with the image. For certain effects, such as a waterfall, you might want to do the animation by simply offsetting the UV coordinates of the texture; you can't do that so easily when everything is packed into a single texture.
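As a sketch of that border problem and the usual mitigation (the names are illustrative, and the exact amount of padding depends on filtering and mipmapping): pack each sprite with a small gutter of empty or duplicated edge pixels around it, and inset the UVs by half a texel so bilinear filtering never samples a neighbor.

```cpp
struct UvRect { float u0, v0, u1, v1; };

// Compute UVs for a sprite stored at pixel rect (x, y, w, h) inside an atlas
// of size atlasW x atlasH, insetting by half a texel on every side so that
// bilinear filtering does not pull in colors from neighboring sprites.
UvRect insetUvs(int x, int y, int w, int h, int atlasW, int atlasH)
{
    const float half = 0.5f;
    UvRect r;
    r.u0 = (x + half)     / static_cast<float>(atlasW);
    r.v0 = (y + half)     / static_cast<float>(atlasH);
    r.u1 = (x + w - half) / static_cast<float>(atlasW);
    r.v1 = (y + h - half) / static_cast<float>(atlasH);
    return r;
}
```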
