SVG images are great for highly detailed graphics, but since they consist of a number of coordinates that need to be calculated before rendering, are they potentially bad for performance, say compared to rendering a JPG, which is simply drawing an array of pre-calculated pixels?
I use Context.drawImage, and I do not know whether the SVG graphics need to be recalculated for every drawn frame of the canvas or whether they are perhaps cached somehow. Or maybe I'm worrying about nothing?
The performance will depend on your specific application and the complexity of your graphic, but generally speaking, vector graphics are not going to have a significant impact. Your main bottleneck will typically be manipulating the pixel data in the canvas; the larger your canvas, the more time it will take to draw.
Unless you are redrawing the canvas every frame, however, the only calculations that are performed at all are those made when you initially draw the image. When you are not modifying it, the canvas is effectively nothing more than a static bitmap.
I'm trying to learn SDL2. The main difference (as far as I can see) between the old SDL and SDL2 is that the old SDL had the window represented by its surface, all pictures were surfaces, and all image operations and blits were surface to surface. In SDL2 we have surfaces and textures. If I got it right, surfaces are in RAM and textures are in graphics memory. Is that right?
My goal is to make an object-oriented wrapper for SDL2 because I had a similar thing for SDL. I want to have a window class and a picture class (with a private texture and surface). The window will have its contents represented by an instance of the picture class, and all blits will be picture-to-picture object blits. How should I organize these picture operations?
Should pixel manipulation be done at the surface level?
If I want to copy part of one picture to another without rendering it, should that be done at the surface level?
Should I blit a surface to a texture only when I want to render it on the screen?
Is it better to render everything to one surface and then render that to the window texture, or to render each picture to the window texture separately?
Generally, when should I use surfaces and when should I use textures?
Thank you for your time and all help and suggestions are welcome :)
First I need to clarify some misconceptions: texture-based rendering does not work the way the old surface-based rendering did. While you can use SDL_Surfaces as source or destination, SDL_Textures are meant to be used as a source for rendering, and the complementary SDL_Renderer is used as the destination. Generally you will have to choose between the old rendering framework, which runs entirely on the CPU, and the new one, which targets the GPU, but mixing is possible.
So, for your questions:
Textures do not provide direct access to their pixels, so pixel manipulation is better done on surfaces.
It depends. It does not hurt to copy between textures if it does not happen very often and you want to render the result with acceleration later.
When working with textures you will always render through the SDL_Renderer, and it is always better to pre-load surfaces into textures.
As I explained in the first paragraph, there is no window texture. You can use either entirely surface-based rendering or entirely texture-based rendering. If you really need both (direct pixel access and accelerated rendering), it is better to do as you said: blit everything to one surface and then upload it to a texture.
Lastly, you should use textures whenever you can. Surfaces are the exception: use them when you either need intensive pixel manipulation or have to deal with legacy code.
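To make that split concrete, here is a minimal sketch of the workflow in C++ with SDL2 (assuming SDL 2.0.5+ for SDL_CreateRGBSurfaceWithFormat and an SDL_Renderer created elsewhere; the size and fill color are just placeholders): all pixel work happens on a surface, which is uploaded once to a texture that is then rendered every frame.

```cpp
#include <SDL.h>

// Sketch only: assumes `renderer` is a valid SDL_Renderer* created elsewhere.
SDL_Texture* build_picture(SDL_Renderer* renderer)
{
    // 1. Do all pixel-level work on a surface in system RAM.
    SDL_Surface* surface = SDL_CreateRGBSurfaceWithFormat(
        0, 128, 128, 32, SDL_PIXELFORMAT_RGBA32);
    SDL_FillRect(surface, nullptr,
                 SDL_MapRGBA(surface->format, 255, 0, 0, 255));
    // ...more SDL_BlitSurface calls / direct pixel access here...

    // 2. Upload the finished surface to a texture once.
    SDL_Texture* texture = SDL_CreateTextureFromSurface(renderer, surface);
    SDL_FreeSurface(surface);   // the surface is no longer needed
    return texture;
}

void draw_frame(SDL_Renderer* renderer, SDL_Texture* texture)
{
    // 3. All per-frame drawing is texture -> renderer (GPU accelerated).
    SDL_RenderClear(renderer);
    SDL_RenderCopy(renderer, texture, nullptr, nullptr);
    SDL_RenderPresent(renderer);
}
```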
I'm researching the possibility of performing occlusion culling in voxel/cube-based games like Minecraft, and I've come across a challenging sub-problem. I'll give the 2D version of it.
I have a bitmap which infrequently has pixels added to or removed from it.
Image Link
What I want to do is maintain some arbitrarily small set of geometry primitives that cover an arbitrarily large area, such that the area covered by all the primitives is within the colored part of the bitmap.
Image Link
Is there a smart way to maintain these sets? Please note that this is different from typical image tracing in that the primitives cannot go outside the lines. If it helps, I already have the bitmap organized into a quadtree.
I want a texture to contain each frame of a sprite's animation. Let's say each frame is 128x128 pixels and there are 4 frames. Then it could easily fit into one 256x256 texture. If I have, for instance, 25 frames, then it would have to fit into one 640x640 texture (128*5=640). However, I read that texture dimensions should be powers of 2 for the best results, forcing the dimensions to be 1024x1024, which is much larger than the original size. In this case, would it be better to load each frame into its own 128x128 texture?
Each time you change the texture you take a performance hit. As such, it would be better to use one large texture, especially if you have multiple copies of the same sprite that could be in different frames of the animation.
Some hardware will not support non-power-of-2 (NPOT) textures, but these are becoming fewer and further between these days. It's probably best to stick to the power-of-2 (POT) texture limitation. Have you checked to see if you can fit multiple different sprites and their animations into one large texture? The more sprite frames you can pack into a single texture, the fewer times you need to change the texture, and hence the faster things will run.
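As a rough illustration of the layout math (plain C++, no particular graphics API; the struct and function names are hypothetical), this lays 128x128 frames row by row into a 1024x1024 power-of-two atlas and computes the UV rectangle for a given frame index:

```cpp
struct UVRect { float u0, v0, u1, v1; };

// Layout: frames packed row by row, left to right, in a square POT atlas.
UVRect frame_uv(int frameIndex, int frameSize, int atlasSize)
{
    const int framesPerRow = atlasSize / frameSize;      // 1024 / 128 = 8
    const int col = frameIndex % framesPerRow;
    const int row = frameIndex / framesPerRow;

    const float texel = 1.0f / static_cast<float>(atlasSize);
    UVRect uv;
    uv.u0 = col * frameSize * texel;
    uv.v0 = row * frameSize * texel;
    uv.u1 = uv.u0 + frameSize * texel;
    uv.v1 = uv.v0 + frameSize * texel;
    return uv;
}

// 25 frames of 128x128 use only 25 of the 64 available slots in a
// 1024x1024 atlas; the unused space can hold other sprites, which is
// exactly why packing multiple sprites into one texture pays off.
```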
Pardon me if my lingo is not correct, as I'm new to game programming. I've been looking at some open source projects and noticed that some sprites are split up into several files, all of which are grouped together to make a 2D object look like it's animating. That's straightforward. Then I'll see a different approach, with the frames of the 2D object all in one PNG file or something similar, laid out next to each other.
Is there an advantage of using one approach to another? Should sprites be in separate files? Why are they sometimes all on one sheet?
The former approach is typically more straightforward and easy to program, so you see a lot of it in open source projects.
The second approach is more efficient on modern graphics hardware, because it allows you to draw multiple different sprites from one large texture by specifying different u,v coordinates to select each individual sprite from the composite sheet. Because u,v coordinates can be streamed along with vertex data to a shader, this allows you to draw a large group of sprites much more efficiently than you could if you had to switch textures (which means changing shader state) for each poly. That means you can draw more sprites per millisecond, and thus get more on screen.
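For illustration, here is a minimal sketch of that batching idea in C++ (API-agnostic; the vertex layout and function names are hypothetical): every sprite that shares the sheet is appended to one vertex array, so the whole group can be submitted with a single texture bind and a single draw call.

```cpp
#include <vector>

// One vertex of a textured quad; x,y in screen space, u,v into the sheet.
struct Vertex { float x, y, u, v; };

// Append one sprite as two triangles; all sprites share the same sheet,
// so the whole batch can be drawn with one texture bind / draw call.
void push_sprite(std::vector<Vertex>& batch,
                 float x, float y, float w, float h,
                 float u0, float v0, float u1, float v1)
{
    const Vertex quad[6] = {
        {x,     y,     u0, v0}, {x + w, y,     u1, v0}, {x + w, y + h, u1, v1},
        {x,     y,     u0, v0}, {x + w, y + h, u1, v1}, {x,     y + h, u0, v1},
    };
    batch.insert(batch.end(), quad, quad + 6);
}

// After filling `batch`, bind the sprite sheet once and submit the whole
// vertex array in a single draw call (e.g. glDrawArrays in OpenGL).
```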
Every time you switch your currently bound texture you incur a penalty (sometimes a very big one if the system runs out of memory and starts paging textures in and out). So the more things you can draw with one texture the better. Going to extremes, if you never switched texture bindings, you'd incur 0 penalty.
On the other hand, video cards limit the maximum size of a texture, so you can only group smaller textures into a big one so much. The older the card the smaller the texture size you can use. So if you want to make your game work on a large variety of cards, you have to limit your textures to a more normal size (or have different sets of textures for different cards).
Another problem is that sometimes the stuff in your virtual world just doesn't lend itself to being grouped like this. While you can have a big texture with every little decoration for your UI (window frames, buttons, etc.), you're going to have a harder time using a single texture for different enemies, because they might not even appear on the screen at the same time, or you might be unable to draw them one after the other because of the back-to-front drawing order necessary for transparency.
Not so long ago, one reason to use packed sprites instead of separate ones was that graphics hardware was limited to power-of-two textures (256, 512, 1024, ...). So you would waste a good amount of memory by not packing the sprites, as you would have to enlarge everything to power-of-two dimensions before you could upload it. Packing multiple sprites into a single texture worked around that.
Another reason is that it's much quicker to load one big image file from disk than it is to load hundreds of small ones. This is still the case, as file access comes with quite a large overhead per file, so the fewer files you have, the faster things become. Especially with small sprites, you can easily turn a hundred files into a single one, so the savings can be quite noticeable.
There are, however, also reasons against having everything in one texture. For one, OpenGL is no longer limited to power-of-two textures, so any size will work. But more importantly, packing everything into one texture has negative side effects. When you, for example, have lots of scaling in a game, you have to be careful about the borders of your sprites, as colors will bleed into neighboring sprites, giving you ugly artifacts. You can avoid that to a certain degree by adding extra space around your sprites, but it's not a perfect solution. Having everything in one texture also limits what you can do with the image. For certain effects, such as a waterfall, you might want to do the animation by simply offsetting the UV coordinates of the texture; you can't do that so easily when everything is packed into a single texture.
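A common mitigation for the bleeding problem, alongside the extra spacing mentioned above, is to inset each sprite's UV rectangle by half a texel so that bilinear filtering never samples a neighboring sprite. A small C++ sketch (hypothetical names; assumes the UVs for the sprite's exact rectangle in the atlas are already known):

```cpp
struct UVRect { float u0, v0, u1, v1; };

// Shrink a sprite's UV rectangle by half a texel on every side so that
// bilinear filtering does not sample the neighboring sprite in the atlas.
UVRect inset_half_texel(UVRect uv, int atlasWidth, int atlasHeight)
{
    const float halfU = 0.5f / static_cast<float>(atlasWidth);
    const float halfV = 0.5f / static_cast<float>(atlasHeight);
    uv.u0 += halfU; uv.v0 += halfV;
    uv.u1 -= halfU; uv.v1 -= halfV;
    return uv;
}
```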
Does anyone know of a graphics system which handles composition of multiple anti-aliased lines well?
I'm showing a dependency diagram and have a bunch of curves emanating from a point. These are drawn anti-aliased in the usual way, by blending partially covered pixels. So if two lines would occupy the same half of a pixel, the anti-aliasing blends it to 75% filled rather than 50% filled. With enough lines drawn on top of each other, the pixel blend clamps and you end up with aliased lines.
I know Anti-Grain Geometry has algorithms for calculating blends that cater for lines which abut, and that oversampling might work, but are there any other approaches?
Handling this form of line composition well is going to be slow (you have to consider all the lines that impinge upon each pixel using a deferred rendering approach). I doubt that there are many (if any) libraries out there that will do it for you.
The quickest and easiest method (and possibly the only realistic and cost-effective solution for your case), which will work with virtually any drawing library, would be to supersample it: draw to an offscreen bitmap at a much higher resolution (e.g. 4 times wider and higher, with lines 4 pixels wide; disable antialiasing when drawing this, as it will only slow things down) and then scale the result down with bilinear filtering. The main downside is that it uses a lot of memory for the offscreen bitmap.
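For illustration, here is a rough C++ sketch of the downscale step, assuming the offscreen image is a single-channel coverage buffer; a plain box filter over each factor x factor block approximates the bilinear downscale described above (names are hypothetical):

```cpp
#include <cstdint>
#include <vector>

// Downscale a grayscale coverage image rendered at `factor` times the target
// resolution by averaging each factor x factor block (a simple box filter).
std::vector<std::uint8_t> downsample(const std::vector<std::uint8_t>& src,
                                     int srcWidth, int srcHeight, int factor)
{
    const int dstWidth  = srcWidth  / factor;
    const int dstHeight = srcHeight / factor;
    std::vector<std::uint8_t> dst(static_cast<std::size_t>(dstWidth) * dstHeight);

    for (int y = 0; y < dstHeight; ++y) {
        for (int x = 0; x < dstWidth; ++x) {
            int sum = 0;
            for (int sy = 0; sy < factor; ++sy)
                for (int sx = 0; sx < factor; ++sx)
                    sum += src[(y * factor + sy) * srcWidth + (x * factor + sx)];
            dst[y * dstWidth + x] =
                static_cast<std::uint8_t>(sum / (factor * factor));
        }
    }
    return dst;
}
```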
If you need an existing system that gets antialiased lines "visually correct", you might try using one of several existing RenderMan-compliant 3D renderers. The REYES algorithm, which many of these renderers use, works by breaking up primitives into micropolygons, then sampling them at several random point locations within each pixel. So even if you have a million lines collectively obscuring 50% of a pixel, the resulting image value will show roughly 50% coverage. (This is, for example, how the millions of antialiased hairs are drawn on characters in many animated movies.)
Of course, using a full-blown 3D renderer to draw 2D lines is like driving nails with a sledgehammer. You'd need a fairly pathological scenario for the 3D renderer to be any more efficient than simply supersampling with a traditional 2D renderer.
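If you wanted to apply the stochastic-sampling idea directly to 2D lines rather than going through a RenderMan-style renderer, a rough C++ sketch might look like this (hypothetical names; lines are modeled as segments with a half-width). Because each sample point is simply inside or outside the union of all lines, many lines covering the same half of a pixel still estimate roughly 50% coverage instead of a clamped blend:

```cpp
#include <algorithm>
#include <cmath>
#include <random>
#include <vector>

struct Segment { float x0, y0, x1, y1, halfWidth; };

// Distance from point (px, py) to segment s.
static float dist_to_segment(float px, float py, const Segment& s)
{
    const float dx = s.x1 - s.x0, dy = s.y1 - s.y0;
    const float lenSq = dx * dx + dy * dy;
    float t = lenSq > 0.0f ? ((px - s.x0) * dx + (py - s.y0) * dy) / lenSq : 0.0f;
    t = std::clamp(t, 0.0f, 1.0f);
    const float cx = s.x0 + t * dx, cy = s.y0 + t * dy;
    return std::sqrt((px - cx) * (px - cx) + (py - cy) * (py - cy));
}

// Estimate what fraction of pixel (x, y) is covered by *any* of the lines.
float pixel_coverage(int x, int y, const std::vector<Segment>& lines,
                     int samples, std::mt19937& rng)
{
    std::uniform_real_distribution<float> jitter(0.0f, 1.0f);
    int hits = 0;
    for (int i = 0; i < samples; ++i) {
        const float px = x + jitter(rng);
        const float py = y + jitter(rng);
        for (const Segment& s : lines) {
            if (dist_to_segment(px, py, s) <= s.halfWidth) { ++hits; break; }
        }
    }
    return static_cast<float>(hits) / static_cast<float>(samples);
}
```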
It sounds like you want a premade drawing library, which I do not know of.
However, to answer your question of knowing any approach that would work, you can consider a pixel to be a square. You can then approximate any shape that you draw as a polygon that intersects the pixel box. By clipping these polygons against the box of the pixel and against each other, you can get a very good estimate of the areas associated with each color that intersects the pixel for accurate antialiasing. This is, of course, very slow to calculate and is not suitable for interactive drawing.
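As a rough sketch of the pixel-box part of that idea in C++ (hypothetical names; the harder step of clipping the shapes against each other is left out), this clips a polygon against one pixel's square with Sutherland-Hodgman clipping and returns the covered area via the shoelace formula:

```cpp
#include <cmath>
#include <vector>

struct Point { double x, y; };

// Signed area of a simple polygon (shoelace formula), returned as absolute value.
double polygon_area(const std::vector<Point>& poly)
{
    double a = 0.0;
    for (std::size_t i = 0; i < poly.size(); ++i) {
        const Point& p = poly[i];
        const Point& q = poly[(i + 1) % poly.size()];
        a += p.x * q.y - q.x * p.y;
    }
    return std::fabs(a) * 0.5;
}

// One Sutherland-Hodgman pass: keep the part of the polygon where inside(p)
// holds, inserting intersection points where edges cross the clip boundary.
template <class Inside, class Intersect>
std::vector<Point> clip_edge(const std::vector<Point>& poly,
                             Inside inside, Intersect intersect)
{
    std::vector<Point> out;
    for (std::size_t i = 0; i < poly.size(); ++i) {
        const Point& cur  = poly[i];
        const Point& prev = poly[(i + poly.size() - 1) % poly.size()];
        const bool curIn = inside(cur), prevIn = inside(prev);
        if (curIn) {
            if (!prevIn) out.push_back(intersect(prev, cur));
            out.push_back(cur);
        } else if (prevIn) {
            out.push_back(intersect(prev, cur));
        }
    }
    return out;
}

// Area of `poly` inside the unit pixel [px, px+1] x [py, py+1]:
// clip against the four sides of the pixel box, then measure what remains.
double coverage_in_pixel(std::vector<Point> poly, double px, double py)
{
    auto lerp_x = [](const Point& a, const Point& b, double x) {
        double t = (x - a.x) / (b.x - a.x);
        return Point{x, a.y + t * (b.y - a.y)};
    };
    auto lerp_y = [](const Point& a, const Point& b, double y) {
        double t = (y - a.y) / (b.y - a.y);
        return Point{a.x + t * (b.x - a.x), y};
    };
    poly = clip_edge(poly, [&](const Point& p) { return p.x >= px; },
                     [&](const Point& a, const Point& b) { return lerp_x(a, b, px); });
    poly = clip_edge(poly, [&](const Point& p) { return p.x <= px + 1; },
                     [&](const Point& a, const Point& b) { return lerp_x(a, b, px + 1); });
    poly = clip_edge(poly, [&](const Point& p) { return p.y >= py; },
                     [&](const Point& a, const Point& b) { return lerp_y(a, b, py); });
    poly = clip_edge(poly, [&](const Point& p) { return p.y <= py + 1; },
                     [&](const Point& a, const Point& b) { return lerp_y(a, b, py + 1); });
    return poly.size() >= 3 ? polygon_area(poly) : 0.0;
}
```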