SDL2 / Surface / Texture / Render - graphics

I'm trying to learn SDL2. The main difference (as far as I can see) between the old SDL and SDL2 is that the old SDL had the window represented by its surface, all pictures were surfaces, and all image operations and blits were surface to surface. In SDL2 we have surfaces and textures. If I got it right, surfaces are in RAM and textures are in graphics memory. Is that right?
My goal is to make an object-oriented wrapper for SDL2, because I had a similar thing for SDL. I want to have a window class and a picture class (with a private texture and surface). The window will have its contents represented by an instance of the picture class, and all blits will be picture-to-picture object blits. How should I organize these picture operations?
Should pixel manipulation be done at the surface level?
If I want to copy part of one picture to another without rendering it, should that also be done at the surface level?
Should I blit surface to texture only when I want to render it on the screen?
Is it better to render it all to one surface and then render it to the window texture or to render each picture to the window texture separately?
Generally, when should I use surface and when should I use texture?
Thank you for your time and all help and suggestions are welcome :)

First I need to clarify some misconceptions: texture-based rendering does not work the way the old surface rendering did. While you can use SDL_Surfaces as sources or destinations, SDL_Textures are meant to be used as sources for rendering, and the complementary SDL_Renderer is used as the destination. Generally you will have to choose between the old rendering framework, which runs entirely on the CPU, and the new one, which targets the GPU, but mixing is possible.
So, for your questions:
Textures do not provide direct access to their pixels, so pixel manipulation is better done on surfaces.
It depends. Copying between textures does not hurt if it does not happen very often and you want to render the result with acceleration later.
When talking about textures you will always render to the SDL_Renderer, and it is always better to pre-load your surfaces into textures.
As I explained in the first paragraph, there is no window texture. You can use either entirely surface-based rendering or entirely texture-based rendering. If you really need both (i.e. you want direct pixel access and accelerated rendering), it is better to do as you said: blit everything to one surface and then upload it to a texture (see the sketch below).
Lastly, you should use textures whenever you can. Surfaces are the exception: use them when you either need intensive pixel manipulation or have to deal with legacy code.
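To make that concrete, here is a minimal, hedged sketch of the surface-then-texture workflow. It assumes an already-created SDL_Renderer; the function name, the 256x256 size and the gradient fill are invented for the example.

```c
#include <SDL2/SDL.h>

/* Do the pixel work on a surface (CPU), upload it once, render the texture. */
SDL_Texture *build_texture(SDL_Renderer *renderer)
{
    /* CPU side: a surface whose pixels we can poke directly. */
    SDL_Surface *surface = SDL_CreateRGBSurfaceWithFormat(
        0, 256, 256, 32, SDL_PIXELFORMAT_RGBA8888);
    if (!surface)
        return NULL;

    SDL_LockSurface(surface);
    Uint32 *pixels = (Uint32 *)surface->pixels;
    for (int y = 0; y < surface->h; ++y)
        for (int x = 0; x < surface->w; ++x)
            pixels[y * (surface->pitch / 4) + x] =
                SDL_MapRGBA(surface->format, (Uint8)x, (Uint8)y, 0, 255);
    SDL_UnlockSurface(surface);

    /* GPU side: upload once; after this the surface is no longer needed. */
    SDL_Texture *texture = SDL_CreateTextureFromSurface(renderer, surface);
    SDL_FreeSurface(surface);
    return texture;
}

/* Per frame you would then do something like:
 *   SDL_RenderClear(renderer);
 *   SDL_RenderCopy(renderer, texture, NULL, NULL);
 *   SDL_RenderPresent(renderer);
 */
```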

Related

How does Skia or Direct2D render lines or polygons with GPU?

This is a question to understand the principles of GPU accelerated rendering of 2d vector graphics.
With Skia or Direct2D, you can draw e.g. rounded rectangles, Bezier curves, polygons, and also have some effects like blur.
Skia / Direct2D offer CPU and GPU based rendering.
For the CPU rendering, I can imagine more or less how e.g. a rounded rectangle is rendered. I have already seen a lot of different line rendering algorithms.
But for GPU, I don't have much of a clue.
Are rounded rectangles composed of triangles?
Are rounded rectangles drawn entirely by wild pixel shaders?
Are there some basic examples which could show me the basic principles of how such things work?
(Probably, the solution could also be found in the source code of Skia, but I fear that it would be so complex / generic that a noob like me would not understand anything.)
In the case of Direct2D there is no source code, but since it uses D3D10/11 under the hood, it's easy enough to see what it does behind the scenes with RenderDoc.
Basically D2D tends to minimize draw calls by fitting any geometry type into a single buffer, whereas Skia has dedicated shader sets depending on the shape type.
So, for example, if you draw a Bezier path, Skia will try to use a tessellation shader if possible (which requires a new draw call if the previous element you rendered was a rectangle), since you change pipeline state.
D2D, on the other hand, tends to tessellate on the CPU and push the result into a vertex buffer; it issues a new draw call only if you change brush type (switching from one solid color brush to another keeps the same shaders, so it doesn't switch), when the buffer is full, or if you switch from shapes to text (since it then needs to bind texture atlases).
Note that when tessellating a Bezier path, D2D does a very good job of making the resulting geometry non-self-intersecting (so alpha blending works properly even on complex self-intersecting paths).
In the case of a rounded rectangle it does the same: it simply tessellates it into triangles (a minimal tessellation sketch follows below).
This allows it to minimize draw calls to a good extent, and to anti-alias on a non-MSAA surface (this is done at the mesh level, with small alpha-blended fringe triangles). The downside is that it doesn't use many hardware features, and the amount of geometry emitted can be quite high, even for seemingly simple shapes.
Since D2D prefers triangle strips over triangle lists, it can do some really funny things when drawing a simple list of triangles.
For text, D2D uses instancing and draws one instanced quad per character. It is also good at batching those, so if you call draw-text functions several times in a row, it will try to merge them into a single call as well.
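To illustrate the CPU-tessellation idea, here is a rough sketch in plain C (not D2D's actual code) of turning a rounded rectangle into a triangle fan. Vec2, emit_triangle and SEGMENTS_PER_CORNER are invented for the example; a real implementation would also emit the thin anti-aliasing fringe triangles mentioned above.

```c
#include <math.h>
#include <stdio.h>

typedef struct { float x, y; } Vec2;

#define SEGMENTS_PER_CORNER 8

static void emit_triangle(Vec2 a, Vec2 b, Vec2 c)
{
    /* In a real renderer these would be appended to a vertex buffer. */
    printf("tri (%.1f,%.1f) (%.1f,%.1f) (%.1f,%.1f)\n",
           a.x, a.y, b.x, b.y, c.x, c.y);
}

/* Tessellate a rounded rectangle (bottom-left corner x,y, size w*h, corner
 * radius r, y axis pointing up) into a triangle fan around its centre.
 * Because the shape is convex, the fan covers it exactly. */
static void tessellate_rounded_rect(float x, float y, float w, float h, float r)
{
    const float HALF_PI = 1.57079632679f;
    Vec2 outline[4 * (SEGMENTS_PER_CORNER + 1)];
    int count = 0;

    /* Arc centres of the four corners, visited counter-clockwise. */
    const Vec2 arc_centre[4] = {
        { x + w - r, y + r     },   /* bottom-right */
        { x + w - r, y + h - r },   /* top-right    */
        { x + r,     y + h - r },   /* top-left     */
        { x + r,     y + r     },   /* bottom-left  */
    };

    for (int c = 0; c < 4; ++c) {
        float start = -HALF_PI + c * HALF_PI;   /* each corner spans 90 degrees */
        for (int s = 0; s <= SEGMENTS_PER_CORNER; ++s) {
            float a = start + HALF_PI * (float)s / SEGMENTS_PER_CORNER;
            outline[count].x = arc_centre[c].x + r * cosf(a);
            outline[count].y = arc_centre[c].y + r * sinf(a);
            ++count;
        }
    }

    /* Fan the convex outline around the rectangle centre. */
    Vec2 centre = { x + w * 0.5f, y + h * 0.5f };
    for (int i = 0; i < count; ++i)
        emit_triangle(centre, outline[i], outline[(i + 1) % count]);
}

int main(void)
{
    tessellate_rounded_rect(0.0f, 0.0f, 100.0f, 60.0f, 12.0f);
    return 0;
}
```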

Draw an image in -drawRect: or load the same image from a file, which is more efficient?

Say I want to display a coordinate graph in a UIView that is updated over time. Assume I am implementing -drawRect: of the UIView and other methods to update the graph.
Since the coordinate frame in the graph stays the same over time, would it be more efficient to have two UIViews: one (view1) that loads the coordinate frame from an image file, so the frame does not need to be redrawn every time the graph updates, and another (view2) that implements -drawRect: and is added as a subview of view1, rather than a single UIView where the entire graph is drawn in -drawRect:?
The above is just a specific example. What I am wondering is whether it is a design pattern to separate static UI elements from dynamic ones as much as possible where -drawRect: is involved, and whether doing so substantially saves CPU (or GPU) resources.
Any insight would be appreciated. Thanks.
My experience is that it is often best to have an offscreen buffer that you draw to, and then copy some or all of that buffer to the screen. There are a number of ways to do this, and it depends on the type of view you're using. For example, if you have an NSOpenGLView, you might draw to a texture-backed FBO and then use that texture to draw a textured quad to the screen. If you are working with CGImages for the static part, you might draw to a CGBitmapContext in main memory and then turn that bitmap context into a CGImage to draw into the view (a rough sketch of this follows below).
How you draw to your offscreen buffer will also depend on what you're drawing. For something like a graph, you might draw the background once to your offscreen buffer and then add points to a curve you're drawing over it as time progresses, and just copy the portion you need to screen. For something like a game where all the characters and objects in the scene are moving, you might need to draw everything in the scene on every frame.
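As a rough illustration of the CGBitmapContext route (assuming an Apple platform; CoreGraphics is a C API, so this is plain C), here is a sketch that renders the static part once into an offscreen bitmap. The function name and the drawing done inside it are stand-ins for the real coordinate frame.

```c
#include <CoreGraphics/CoreGraphics.h>

/* Render the static background (axes, grid, ...) once into an offscreen
 * bitmap and hand back a CGImage that can be drawn cheaply every frame. */
static CGImageRef create_static_background(size_t width, size_t height)
{
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, width, height,
                                             8,   /* bits per component */
                                             0,   /* let CG pick row bytes */
                                             space,
                                             kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(space);
    if (ctx == NULL)
        return NULL;

    /* Draw the parts that never change; here just a white background and a
     * single axis line as a placeholder for the real coordinate frame. */
    CGContextSetRGBFillColor(ctx, 1.0, 1.0, 1.0, 1.0);
    CGContextFillRect(ctx, CGRectMake(0, 0, (CGFloat)width, (CGFloat)height));
    CGContextSetRGBStrokeColor(ctx, 0.0, 0.0, 0.0, 1.0);
    CGContextMoveToPoint(ctx, 20.0, 20.0);
    CGContextAddLineToPoint(ctx, 20.0, (CGFloat)height - 20.0);
    CGContextStrokePath(ctx);

    CGImageRef image = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);
    return image;   /* caller releases with CGImageRelease() */
}
```

Inside -drawRect: you would then just draw this cached image (e.g. with CGContextDrawImage) and render only the dynamic curve on top of it.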

2d tile based game design, how do I draw a map with a viewport?

I've been struggling with this for a while.
Presently, I have a grid of 100 by 100 tiles, which belong to a Map.
The Map implements IDrawable. I call Draw() and it draws itself at 0,0 which is fine.
However, I want to expand this to draw essentially a viewport. The player will be drawn on the screen in the middle, and thus I want to display say, 10 tiles in each direction (rather than the entire map).
I'm having trouble thinking up the architecture for this one. I'm in the mindset that things should draw themselves, ie I say player1.Draw() and it draws itself. This would have worked before, where it drew the player at x,y on the screen, but with a viewport it will no longer know where to draw itself.
So should the viewport be told to draw, and examine every object in the game and draw those which are visible? Should the map tiles be objects that are subjected to this? Or should the viewport intelligently draw the map by coupling both together?
I'd love to know how typical scrolling tile games accomplish this.
If it matters, I'm using XNA
Edit to add: Can you do something like the HTML rendering approach, where you tell things to draw, they return a graphic of themselves, and then the parent places the graphic in the correct location? I'm thinking, if I had 2 viewports side by side for split screen, how would I stop them drawing outside the edges?
Possible design:
There's a 2D "world" that contains object instances.
"Object instance" is a sprite reference + its coordinates in the world.
When you draw the scene, you request the list of visible objects that exist in the given 2D area, THEN you draw them (see the sketch below).
With such a design the world can be very large.
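A bare-bones sketch of that design, written as plain C rather than XNA/C#; all of the names here are invented for the example.

```c
#include <stddef.h>

typedef struct { float x, y, w, h; } Rect;

typedef struct {
    int   sprite_id;   /* which sprite to draw           */
    float x, y;        /* position in world coordinates  */
    float w, h;        /* size in world units            */
} ObjectInstance;

typedef struct {
    ObjectInstance *objects;
    size_t          count;
} World;

static int rects_overlap(Rect a, Rect b)
{
    return a.x < b.x + b.w && b.x < a.x + a.w &&
           a.y < b.y + b.h && b.y < a.y + a.h;
}

/* Collect the objects that intersect the viewport; 'out' must have room for
 * world->count entries. Returns how many were written. */
static size_t query_visible(const World *world, Rect viewport,
                            ObjectInstance *out)
{
    size_t n = 0;
    for (size_t i = 0; i < world->count; ++i) {
        Rect r = { world->objects[i].x, world->objects[i].y,
                   world->objects[i].w, world->objects[i].h };
        if (rects_overlap(r, viewport))
            out[n++] = world->objects[i];
    }
    return n;
}

/* Drawing then subtracts the viewport origin, so a visible object ends up at
 * screen position (obj.x - viewport.x, obj.y - viewport.y). */
```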
I'm in the mindset that things should draw themselves, ie I say player1.Draw() and it draws itself.
Visible things should draw themselves. Objects outside of the viewport are not visible.
, how would I stop them drawing outside the edges?
Not sure about XNA, but OpenGL has the scissor test / glViewport and Direct3D 9 has a SetViewport method that let you use only part of the screen/window for rendering (see the sketch below). There are also clip planes and the stencil buffer (using the stencil buffer for 2D clipping is overkill, though). You could also render to a texture and then render that texture. There are many ways to deal with this.
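For reference, here is what the OpenGL route looks like with the fixed-function API (not XNA); draw_world() is a placeholder for whatever actually renders the scene.

```c
#include <GL/gl.h>

void draw_world(void);   /* assumed to exist elsewhere */

/* Confine two split-screen views to their halves of the window. */
void draw_splitscreen(int window_w, int window_h)
{
    glEnable(GL_SCISSOR_TEST);

    /* Left player: left half of the window. glScissor clips all drawing,
     * including glClear, to the given rectangle. */
    glViewport(0, 0, window_w / 2, window_h);
    glScissor (0, 0, window_w / 2, window_h);
    glClear(GL_COLOR_BUFFER_BIT);
    draw_world();

    /* Right player: right half of the window. */
    glViewport(window_w / 2, 0, window_w / 2, window_h);
    glScissor (window_w / 2, 0, window_w / 2, window_h);
    glClear(GL_COLOR_BUFFER_BIT);
    draw_world();

    glDisable(GL_SCISSOR_TEST);
}
```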
So should the viewport be told to draw, and examine every object in the game and draw those which are visible?
For a large world, you shouldn't examine every object, because it will be slow. You should be able to find visible objects without testing every one of them. For that you'll need some kind of space partitioning: quadtrees (because we are in 2D), k-d trees, etc. This way you should be able to handle a few thousand (or even hundreds of thousands of) objects, as long as you don't see them all at once. (A simple grid-based sketch follows below.)
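As a hedged illustration of space partitioning in its simplest form, here is a uniform-grid sketch in plain C (a quadtree follows the same idea with adaptive cells). The names, cell size and fixed bucket size are invented for the example.

```c
#define CELL_SIZE    256.0f      /* world units per grid cell             */
#define GRID_W       64          /* grid is GRID_W * GRID_H cells         */
#define GRID_H       64
#define MAX_PER_CELL 64          /* crude fixed bucket size, for brevity  */

typedef struct {
    int indices[MAX_PER_CELL];   /* indices into the world's object array */
    int count;
} Cell;

static Cell grid[GRID_W][GRID_H];

static void grid_insert(float x, float y, int object_index)
{
    int cx = (int)(x / CELL_SIZE);
    int cy = (int)(y / CELL_SIZE);
    if (cx < 0 || cy < 0 || cx >= GRID_W || cy >= GRID_H) return;
    Cell *c = &grid[cx][cy];
    if (c->count < MAX_PER_CELL)
        c->indices[c->count++] = object_index;
}

/* Visit only the cells the viewport overlaps instead of every object. */
static void grid_query(float vx, float vy, float vw, float vh,
                       void (*visit)(int object_index))
{
    int x0 = (int)(vx / CELL_SIZE), x1 = (int)((vx + vw) / CELL_SIZE);
    int y0 = (int)(vy / CELL_SIZE), y1 = (int)((vy + vh) / CELL_SIZE);
    for (int cx = x0; cx <= x1; ++cx) {
        for (int cy = y0; cy <= y1; ++cy) {
            if (cx < 0 || cy < 0 || cx >= GRID_W || cy >= GRID_H) continue;
            for (int i = 0; i < grid[cx][cy].count; ++i)
                visit(grid[cx][cy].indices[i]);
        }
    }
}
```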
Should the map tiles be objects that are subjected to this?
If you keep drawing invisible things, FPS will drop.
and they return a graphic of themselves
For a 2D game this may be very slow. Remember the KISS principle.
Some basic ideas, not specifically for XNA:
Objects draw themselves to a "virtual screen" in world coordinates; they don't draw themselves to the screen directly.
Drawable objects get a "graphics context" object which offers a drawing API. The graphics context knows about the current viewport bounds and performs the coordinate transformation from world coordinates to screen coordinates for every drawing operation (see the sketch after this list). It also does the actual drawing to the screen (or to a background buffer, if you need double buffering).
When you have many objects outside the visible bounds of your viewport, your drawing loop can, as a performance optimization, do an up-front bounds check and skip objects that are completely outside the visible area; there is no need to let them draw themselves.
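Here is a minimal sketch of that graphics-context idea in plain C; GraphicsContext, gc_draw_sprite and the assumed blit_sprite backend are invented for the illustration, not a real API.

```c
typedef struct {
    float view_x, view_y;   /* top-left of the viewport in world space */
    float view_w, view_h;   /* viewport size in world units            */
} GraphicsContext;

/* Assumed low-level backend that draws a sprite at screen coordinates. */
void blit_sprite(int sprite_id, float screen_x, float screen_y,
                 float w, float h);

/* The drawing API that objects see: world coordinates in, screen
 * coordinates out, with a simple culling test built in. */
void gc_draw_sprite(const GraphicsContext *gc, int sprite_id,
                    float world_x, float world_y, float w, float h)
{
    /* Cull anything completely outside the viewport. */
    if (world_x + w < gc->view_x || world_x > gc->view_x + gc->view_w ||
        world_y + h < gc->view_y || world_y > gc->view_y + gc->view_h)
        return;

    /* World -> screen: subtract the viewport origin; a zooming camera
     * would also apply a scale factor here. */
    blit_sprite(sprite_id, world_x - gc->view_x, world_y - gc->view_y, w, h);
}
```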

How to get a pixel color in J2ME?

I have drawn on a Canvas and I want to know how to get the color of a pixel of that Canvas.
Create a mutable Image the same size as your Canvas. Then, any operations you perform on your Canvas's Graphics object, perform the same ones on your Image's Graphics object.
Finally, get the pixel data from the Image using getRGB(); it should be the same as the Canvas.
Unfortunately, you can't. The Graphics class, which is used to draw to a Canvas, is for painting only; it can't give you any information about the pixels.
If you are targeting a platform that supports the NokiaUI API, you can use DirectGraphics#getPixels to read pixel data. On mobile platforms with graphics accelerator hardware, reading pixels tends to be slow, so you should use this very sparingly.

Suggestion for graphics library for 2D game (PC)

I'm trying to lay the groundwork for a 2D game with destructible terrain and/or particle effects, scrolling, zooming, characters, etc. I'd like to know if there is a graphics library that supports those things with both software and hardware acceleration (I need pixel access). I've tried SDL (even with the DirectX back-end), but it seems hardware acceleration only does its job in full screen. I'd appreciate any suggestions.
Use OpenGL, perhaps via another library such as SDL. I don't know why you can't get windowed HW acceleration working; it might be a platform thing (but that's certainly a different question).
Set the projection matrix to orthographic and use one of the axes (typically z) to organise 'stacking' elements. With an appropriate transformation in the display subroutine, you can align the x/y coordinates with "traditional" drawing (i.e., top-left down, rather than bottom-left up).
Build your graphical elements into bitmaps, convert them into textures, and draw them onto OpenGL quads (see the sketch below).
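A small fixed-function OpenGL sketch of that setup; the texture id and sprite coordinates are placeholders. Note that if you rely on z for stacking you also need a depth buffer with depth testing enabled.

```c
#include <GL/gl.h>

void setup_2d_projection(int window_w, int window_h)
{
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    /* left, right, bottom, top, near, far: top = 0 and bottom = h, so the
     * y axis grows downward like "traditional" 2D drawing. */
    glOrtho(0.0, (double)window_w, (double)window_h, 0.0, -1.0, 1.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}

/* Draw one textured quad at (x, y), size w*h, stacked at depth z. */
void draw_sprite(unsigned int texture_id, float x, float y,
                 float w, float h, float z)
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, texture_id);
    glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex3f(x,     y,     z);
        glTexCoord2f(1.0f, 0.0f); glVertex3f(x + w, y,     z);
        glTexCoord2f(1.0f, 1.0f); glVertex3f(x + w, y + h, z);
        glTexCoord2f(0.0f, 1.0f); glVertex3f(x,     y + h, z);
    glEnd();
}
```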
