Sprites in game programming, multiple files vs one "texture"?

Pardon me if my lingo isn't correct, as I'm new to game programming. I've been looking at some open source projects and noticed that some sprites are split up into several files, all of which are grouped together to make a 2D object look like it's animating. That's straightforward. Then I'll see a different approach, with the 2D object all in one PNG file or something similar, with all the frames next to each other.
Is there an advantage to using one approach over the other? Should sprites be in separate files? Why are they sometimes all on one sheet?

The former approach is typically more straightforward and easy to program, so you see a lot of it in open source projects.
The latter approach is more efficient on modern graphics hardware, because it allows you to draw many different sprites from one large texture by specifying different u,v coordinates to select each individual sprite from the composite sheet. Because u,v coordinates can be streamed along with vertex data to a shader, you can draw a large group of sprites much more efficiently than if you had to switch textures (which means changing render state) for each polygon. That means you can draw more sprites per millisecond, and thus get more on screen.
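As a concrete illustration of the u,v selection, here is a minimal TypeScript sketch for a grid-style sprite sheet (all names and the layout are assumptions for the example, not from any particular engine):

```typescript
interface UVRect {
  u0: number; v0: number; // one corner in [0, 1] texture space
  u1: number; v1: number; // opposite corner
}

// Given a sprite sheet laid out as a grid of equally sized square frames,
// return the UV rectangle selecting frame `index` (row-major order).
function frameUVs(
  atlasWidth: number,   // atlas width in pixels
  atlasHeight: number,  // atlas height in pixels
  frameSize: number,    // width/height of one square frame in pixels
  index: number         // which frame to select
): UVRect {
  const cols = Math.floor(atlasWidth / frameSize);
  const col = index % cols;
  const row = Math.floor(index / cols);
  return {
    u0: (col * frameSize) / atlasWidth,
    v0: (row * frameSize) / atlasHeight,
    u1: ((col + 1) * frameSize) / atlasWidth,
    v1: ((row + 1) * frameSize) / atlasHeight,
  };
}

// Example: frame 9 of 128x128 frames in a 1024x512 atlas (8 columns)
// sits at column 1, row 1, i.e. u0 = 0.125, v0 = 0.25.
console.log(frameUVs(1024, 512, 128, 9));
```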

Every time you switch your currently bound texture you incur a penalty (sometimes a very big one if the system runs out of memory and starts paging textures in and out). So the more things you can draw with one texture the better. Going to extremes, if you never switched texture bindings, you'd incur 0 penalty.
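A common way to exploit this is to sort your draw list by texture so each texture is bound at most once per frame. A minimal sketch, with a hypothetical bindTexture/drawSprite API:

```typescript
interface Sprite {
  textureId: number;
  x: number;
  y: number;
}

function drawAll(
  sprites: Sprite[],
  bindTexture: (id: number) => void, // hypothetical: binds a texture
  drawSprite: (s: Sprite) => void    // hypothetical: issues one draw
): void {
  // Sort so sprites sharing a texture are drawn consecutively.
  const sorted = [...sprites].sort((a, b) => a.textureId - b.textureId);
  let bound = -1;
  for (const s of sorted) {
    if (s.textureId !== bound) {
      bindTexture(s.textureId); // the expensive state change, now minimized
      bound = s.textureId;
    }
    drawSprite(s);
  }
}
```

Note that sorting by texture can conflict with the back-to-front ordering transparency requires, a tension mentioned below.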
On the other hand, video cards limit the maximum size of a texture, so there is a limit to how many smaller images you can pack into one big one. The older the card, the smaller the maximum texture size. So if you want your game to work on a wide variety of cards, you have to limit your textures to a more conservative size (or ship different sets of textures for different cards).
Another problem is that sometimes the content of your virtual world just doesn't lend itself to being grouped like this. While you can have one big texture with every little decoration for your UI (window frames, buttons, etc.), you'll have a harder time using a single texture for different enemies, because they might not even appear on screen at the same time, or you might be unable to draw them one after the other because of the back-to-front drawing order required for transparency.

Not so long ago, one reason to use packed sprites instead of separate ones was that graphics hardware was limited to power-of-two textures (256, 512, 1024, ...). You would waste a good amount of memory by not packing the sprites, since you would have to enlarge everything to power-of-two dimensions before you could upload it. Packing multiple sprites into a single texture worked around that.
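A quick back-of-the-envelope sketch of that waste (the 200x120 sprite is just an example size):

```typescript
// Smallest power of two >= n
function nextPow2(n: number): number {
  let p = 1;
  while (p < n) p *= 2;
  return p;
}

// A single 200x120 sprite padded to POT needs a 256x128 texture:
// 200*120 = 24000 useful pixels out of 256*128 = 32768, so ~27% is wasted.
const w = nextPow2(200), h = nextPow2(120);
console.log(w, h, 1 - (200 * 120) / (w * h));
```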
Another reason is that it's much quicker to load one big image file from disk than it is to load hundreds of small ones. This is still the case, as file access comes with quite a large per-file overhead, so the fewer files you have, the faster things get. Especially with small sprites, you can easily turn a hundred files into a single one, so the savings can be quite noticeable.
There are, however, also reasons against having everything in one texture. For one, OpenGL is no longer limited to power-of-two textures, so any size will work. More importantly, packing everything into one texture has negative side effects. If, for example, you have a lot of scaling in a game, you have to be careful about the borders of your sprites, as colors will bleed into neighboring sprites and give you ugly artifacts. You can avoid that to a degree by adding extra space around your sprites, but it's not a perfect solution. Having everything in one texture also limits what you can do with the image. For certain effects, such as a waterfall, you might want to animate by simply offsetting the UV coordinates of the texture; you can't do that so easily when everything is packed into a single texture.
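To make the waterfall example concrete, here is a minimal sketch of UV-offset animation on a standalone texture; the wrap-around it relies on (OpenGL's GL_REPEAT wrap mode) is exactly what a packed atlas takes away:

```typescript
// Scroll a texture vertically by offsetting V each frame.
// With GL_REPEAT, v values outside [0, 1] wrap around, which only
// works when the waterfall owns the whole texture; in a packed atlas
// the offset would sample into neighboring sprites instead.
function waterfallUVs(timeSeconds: number, scrollSpeed: number) {
  const offset = (timeSeconds * scrollSpeed) % 1.0;
  return {
    v0: 0.0 + offset,
    v1: 1.0 + offset, // relies on wrapping past 1.0
  };
}
```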

Related

What is the fastest engine for drawing large numbers of semitransparent triangles?

I enjoy computer graphics.
I was wondering what the fastest engine was with the following functionality:
Draws triangles with four color channels (RGBA) and allows for the drawing of point and directional lights.
Texturing would be a cool additional feature, but again I am looking for the fastest engine, not the most functional. Camera animation and object animation will be imperative.
Finally there are really 2 answers for this question, 1 for general development and one for web, but if you can only speak to one or the other your contributions will be appreciated!
There are quite a lot of engines that do the job. One of the best known is Unity, which also gives you tons of other features with good performance.
But I think you are not really looking for an engine but an API. Examples are OpenGL or DirectX (already mentioned). OpenGL even has a web-specific variant (WebGL).
There is one more problem: the triangles should be semitransparent. What is missing in the other answer is the question of whether the triangles are already ordered. OpenGL, for example, is good at rendering objects where it does not matter which triangle is nearest to the viewer: the depth buffer finds the nearest one on the fly, and only the visible triangle is shown. But with semitransparent triangles it is possible to see different triangles overlapping each other, so it is not only necessary to know which triangle is in front, but also which triangle comes directly after that, and so on. OpenGL offers blending for this, but it is necessary to sort the semitransparent triangles manually before rendering. This is called the painter's algorithm. Sorting is a complex problem, especially with a large number of objects, so it can take quite a long time.
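A minimal sketch of that manual sort (hypothetical types; distances assumed to be precomputed per triangle):

```typescript
interface Triangle {
  dist: number; // distance of the triangle's centroid from the camera
}

// Draw farthest triangles first so alpha blending composites correctly.
function painterSort(tris: Triangle[]): Triangle[] {
  return [...tris].sort((a, b) => b.dist - a.dist);
}
```

Centroid-based sorting can still mis-order intersecting or mutually overlapping triangles, which is exactly the case that depth peeling, described next, handles.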
For this there is another solution called "depth peeling". The idea is to render all triangles multiple times with OpenGL. The first pass gives you the triangles that are in front. Then you render all triangles again, but without the ones in front, which yields the second-nearest triangles to the viewer. After that, all triangles are rendered again without the first two "peels", which gives the third-nearest triangles, and so on. This is expensive because everything has to be rendered multiple times, but with a very large number of triangles it is faster than sorting (and more precise where triangles intersect or overlap). In most cases four peels are enough for good results. For further reading I suggest Everitt's paper: http://gamedevs.org/uploads/interactive-order-independent-transparency.pdf
Your best bet is probably OpenGL. In the case of the web, you could use WebGL and in the case of native desktop or mobile development you could directly use OpenGL.

Conservatively cover bitmap with small number of primitives?

I'm researching the possibility of performing occlusion culling in voxel/cube-based games like Minecraft, and I've come across a challenging sub-problem. I'll give the 2D version of it.
I have a bitmap, which infrequently has pixels get either added to or removed from it.
[Image: the bitmap]
What I want to do is maintain some arbitrarily small set of geometry primitives that cover an arbitrarily large area, such that the area covered by all the primitives is within the colored part of the bitmap.
[Image: primitives covering the bitmap]
Is there a smart way to maintain these sets? Please note that this is different from typical image tracing in that the primitives cannot go outside the lines. If it helps, I already have the bitmap organized into a quadtree.

Performance of rendering SVG images on HTML 5 canvas

SVG images are great for highly detailed graphics, but since they consist of a number of coordinates that need to be calculated before rendering, are they potentially bad for performance, say, compared to rendering a JPEG, which simply draws an array of pre-calculated pixels?
I use Context.drawImage, and I do not know whether the SVG graphics need to be recalculated for every drawn frame of the canvas or whether they are perhaps cached somehow. Or maybe I'm worrying about nothing?
The performance will depend on your specific application and the complexity of your graphic, but generally speaking the vector graphics are not going to have a significant impact. Your main bottleneck will typically be in manipulating the pixel data in the canvas; the larger your canvas, the more time it will take to draw.
Unless you are redrawing the canvas every frame, however, the only calculations performed at all are those made when you initially draw the image. When you are not modifying it, the canvas is effectively nothing more than a static bitmap.
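If you do end up redrawing every frame, one common mitigation (a sketch, not something the canvas does for you automatically) is to rasterize the SVG once into an offscreen canvas and blit that cached bitmap from then on:

```typescript
// Rasterize an SVG image once, then reuse the cached bitmap every frame.
function cacheSvg(
  img: HTMLImageElement, // an <img> whose src is the SVG, already loaded
  width: number,
  height: number
): HTMLCanvasElement {
  const off = document.createElement("canvas");
  off.width = width;
  off.height = height;
  const ctx = off.getContext("2d")!;
  ctx.drawImage(img, 0, 0, width, height); // vector-to-pixel cost paid once
  return off;
}

// Per frame: ctx.drawImage(cached, x, y) is a cheap bitmap copy,
// no matter how complex the original SVG was.
```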

Power of 2 Textures with Sprite Animation

I want a texture to contain each frame of a sprite's animation. Let's say each frame is 128x128 pixels and there are 4 frames; then it could easily fit into one 256x256 texture. If I have, for instance, 25 frames, it would have to fit into one 640x640 texture (128*5=640). However, I've read that texture dimensions should be powers of 2 for the best results, forcing the dimensions up to 1024x1024, which is much larger than the original size. In this case, would it be better to load each frame into its own 128x128 texture?
Each time you change the texture you suffer a performance hit, so it would be better to use one large texture, especially if you have multiple copies of the same sprite that could be in different frames of the animation.
Some hardware will not support non-power-of-two (NPOT) textures, though such hardware is becoming rarer these days. It's probably best to stick to the power-of-two (POT) texture limitation. Have you checked whether you can fit multiple different sprites and their animations into one large texture? The more sprite frames you can pack into a single texture, the fewer times you need to change the texture, and hence the faster things will run.
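For the numbers in the question, note that POT doesn't force a square: 25 frames of 128x128 fit in a 1024x512 POT texture (an 8x4 grid gives 32 slots), wasting far less than 1024x1024 would. A sketch of that search (ignoring per-card maximum texture sizes):

```typescript
// Smallest power of two >= n
function nextPow2(n: number): number {
  let p = 1;
  while (p < n) p *= 2;
  return p;
}

// Find the smallest POT texture (by area) that fits n square frames
// arranged in a simple grid. Returns [width, height] in pixels.
function bestPotAtlas(frameSize: number, n: number): [number, number] {
  let best: [number, number] = [0, 0];
  let bestArea = Infinity;
  for (let cols = 1; cols <= n; cols++) {
    const rows = Math.ceil(n / cols);
    const w = nextPow2(cols * frameSize);
    const h = nextPow2(rows * frameSize);
    if (w * h < bestArea) {
      bestArea = w * h;
      best = [w, h];
    }
  }
  return best;
}

// 25 frames of 128x128 -> 1024x512 (half the area of 1024x1024),
// instead of padding a 640x640 sheet all the way up to 1024x1024.
console.log(bestPotAtlas(128, 25));
```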

Antialiased composition by coverage?

Does anyone know of a graphics system which handles composition of multiple anti-aliased lines well?
I'm showing a dependency diagram and have a bunch of curves emanating from a point. These are drawn anti-aliased in the usual way, by blending partially covered pixels. So if two lines each occupy the same half of a pixel, the antialiasing blends it to 75% filled rather than 50% filled. With enough lines drawn on top of each other, the pixel blend clamps and you end up with aliased lines.
I know Anti-Grain Geometry has algorithms for calculating blends which cater for lines which abut, and that oversampling might work, but are there any other approaches?
Handling this form of line composition well is going to be slow (you have to consider all the lines that impinge upon each pixel using a deferred rendering approach). I doubt that there are many (if any) libraries out there that will do it for you.
The quickest and easiest method (and possibly the only realistic and cost-effective solution in your case), which will work with virtually any drawing library, is to supersample: draw to an offscreen bitmap at a much higher resolution (e.g. 4 times wider and higher, with lines 4 pixels wide; disable antialiasing when drawing this, as it will only slow things down) and then scale the result down with bilinear filtering. The main downside is that it uses a lot of memory for the offscreen bitmap.
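A minimal sketch of that approach using the HTML canvas API (the 4x factor and function names are just for illustration):

```typescript
const SCALE = 4; // supersampling factor

// Draw all lines at 4x resolution, then downscale once.
function renderSupersampled(
  target: CanvasRenderingContext2D,
  width: number,
  height: number,
  drawLines: (ctx: CanvasRenderingContext2D) => void // caller's drawing code
): void {
  const off = document.createElement("canvas");
  off.width = width * SCALE;
  off.height = height * SCALE;
  const offCtx = off.getContext("2d")!;
  offCtx.scale(SCALE, SCALE); // keep normal coordinates, 4 device pixels per unit
  drawLines(offCtx);
  // Filtering on the downscale averages the subpixel coverage.
  target.imageSmoothingEnabled = true;
  target.drawImage(off, 0, 0, width, height);
}
```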
If you need an existing system that gets antialiased lines "visually correct", you might try using one of several existing RenderMan-compliant 3D renderers. The REYES algorithm, which many of these renderers use, works by breaking up primitives into micropolygons, then sampling them at several random point locations within each pixel. So even if you have a million lines collectively obscuring 50% of a pixel, the resulting image value will show roughly 50% coverage. (This is, for example, how the millions of antialiased hairs are drawn on characters in many animated movies.)
Of course, using a full-blown 3D renderer to draw 2D lines is like driving nails with a sledgehammer. You'd need a fairly pathological scenario for the 3D renderer to be any more efficient than simply supersampling with a traditional 2D renderer.
It sounds like you want a premade drawing library, which I do not know of.
However, to answer your question of knowing any approach that would work, you can consider a pixel to be a square. You can then approximate any shape that you draw as a polygon that intersects the pixel box. By clipping these polygons against the box of the pixel and against each other, you can get a very good estimate of the areas associated with each color that intersects the pixel for accurate antialiasing. This is, of course, very slow to calculate and is not suitable for interactive drawing.
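A sketch of the core operation: clipping a polygon against a pixel's square (Sutherland-Hodgman) and taking the clipped area as the coverage estimate:

```typescript
type Pt = [number, number];

// Clip a polygon against one half-plane where keep(p) >= 0.
function clipHalfPlane(poly: Pt[], keep: (p: Pt) => number): Pt[] {
  const out: Pt[] = [];
  for (let i = 0; i < poly.length; i++) {
    const a = poly[i];
    const b = poly[(i + 1) % poly.length];
    const da = keep(a), db = keep(b);
    if (da >= 0) out.push(a); // keep vertices inside the half-plane
    if ((da >= 0) !== (db >= 0)) {
      // Edge crosses the boundary: add the intersection point.
      const t = da / (da - db);
      out.push([a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1])]);
    }
  }
  return out;
}

// Unsigned polygon area via the shoelace formula.
function area(poly: Pt[]): number {
  let s = 0;
  for (let i = 0; i < poly.length; i++) {
    const [x0, y0] = poly[i];
    const [x1, y1] = poly[(i + 1) % poly.length];
    s += x0 * y1 - x1 * y0;
  }
  return Math.abs(s) / 2;
}

// Fraction of the unit pixel at (px, py) covered by `poly`.
function coverage(poly: Pt[], px: number, py: number): number {
  let p = poly;
  p = clipHalfPlane(p, q => q[0] - px);     // left edge of the pixel
  p = clipHalfPlane(p, q => px + 1 - q[0]); // right edge
  p = clipHalfPlane(p, q => q[1] - py);     // bottom edge
  p = clipHalfPlane(p, q => py + 1 - q[1]); // top edge
  return p.length >= 3 ? area(p) : 0;
}
```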

Resources