Is it guaranteed that SDL_RenderCopy will not be anti-aliased?

The SDL documentation for SDL_RenderCopy says that the texture will be stretched, but I can find absolutely no information about how the stretching will be done.
My experiments on Linux show that stretching is not anti-aliased. Can I rely on this? Will it behave the same way on other platforms? Will it not change in future versions?
Searching for anti-aliasing in the SDL Wiki yields only one result, and it refers to the OpenGL configuration; the Wiki seems to be silent on how the stretching of textures is done. I’m writing a pixel-art game that won’t look good if anti-aliasing is applied, so I’d like to make sure it will never be anti-aliased.

Yes, it is guaranteed: by default SDL2 uses nearest-pixel stretching. You have to explicitly opt into any other filtering by setting a hint (explained at http://wiki.libsdl.org/SDL_HINT_RENDER_SCALE_QUALITY).
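In C that looks something like the sketch below (minimal and untested, but the hint name and values are the documented ones):

#include <SDL.h>

int main(int argc, char *argv[])
{
    SDL_Init(SDL_INIT_VIDEO);

    /* "nearest" (or "0") is the default; setting it explicitly documents the
       intent. It must be set before the textures it should apply to exist. */
    SDL_SetHint(SDL_HINT_RENDER_SCALE_QUALITY, "nearest");

    SDL_Window *win = SDL_CreateWindow("pixel art", SDL_WINDOWPOS_CENTERED,
                                       SDL_WINDOWPOS_CENTERED, 640, 480, 0);
    SDL_Renderer *ren = SDL_CreateRenderer(win, -1, 0);

    /* ... load textures; SDL_RenderCopy into a larger dst rect now stays
       blocky, because sampling is nearest-neighbour ... */

    SDL_DestroyRenderer(ren);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}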
Good luck with your game.

Related

Moving a graphic over a background image

I have a new project but I'm really not sure where to start, other than I think GNAT Ada would be good. I taught myself Ada three years ago and I have managed some static graphics with GNAT but this is different.
Please bear with me, all I need is a pointer or two towards where I might start, I'm not asking for a solution. My history is in back-end languages that are now mostly obsolete, so graphics is still a bit of a challenge.
So, the project:
With a static background image (a photograph), and a moveable line with an adjustable cursor somewhere between the ends of the line. I need to rotate the line and adjust its length, as well as move it around the screen and slide the cursor along the line; I have no problem calculating the positions of each element of the line. Once in place, I need to report the position of the cursor relative to the overall length of the line. I can probably handle the reporting with what I already know, but I have no clue how to create a graphic that I can slide around over another image. In the past I have failed to detect mouse events in GNAT Ada and I am sure I will need to get on top of that; in fact, if I could, I would probably manage to control the line, but doing it over an existing image is beyond me.
If I am wrong to choose GNAT Ada for this, please suggest an alternative.
I have looked in Stackoverflow for anything similar but have found answers only for Blackberry and Java, neither of which seems relevant.
For background, this will be a useful means of measuring relative lengths of the features of insect bodies from photographs, hopefully to set up some definitive identification guides for closely-related species.
With a static background image (a photograph)
So first you need a window to put your interface in. You can get this from any GUI framework (link as given by trashgod in the comments).
and a moveable line with an adjustable cursor somewhere between the ends of the line. I need to rotate the line and adjust its length, as well as move it around the screen and slide the cursor along the line; I have no problem calculating the positions of each element of the line.
These are affine transformations. They are commonly employed in low-level graphics rendering. You can, as Zerte suggested, employ OpenGL; however, modern OpenGL has a steep learning curve for beginners.
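For reference, the whole transformation of a point (x, y) by a rotation θ, a uniform scale s and a translation (t_x, t_y) is just:

\begin{pmatrix} x' \\ y' \end{pmatrix} = s \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} t_x \\ t_y \end{pmatrix}

Since you say the position calculations are not a problem, rotating, stretching and moving your line amounts to running its two endpoints through this.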
GtkAda includes a binding to the Cairo graphics library, which supports such transformations, so you can create a window with GtkAda with a Cairo surface and then render your image and line on it. Cairo does have a learning curve, and I have never used the Ada binding, so I cannot really give an opinion about how complex this will be.
Another library that fully supports what you want to do is SDL, which has Ada bindings here. The difference from GtkAda is that SDL is a pure graphics drawing library, so you need to "draw" any interactive controls yourself. On the other hand, setting up a window via SDL and drawing things will be somewhat simpler than doing it via GtkAda and Cairo.
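To give an idea of the drawing side, a frame boils down to something like this sketch (shown in C, SDL's native API; the Ada binding mirrors these calls closely, and `ren`, `photo` and the line endpoints are assumed to exist already):

#include <SDL.h>

/* Redraw the whole scene each frame: photograph first, line on top. */
void draw_frame(SDL_Renderer *ren, SDL_Texture *photo,
                int x1, int y1, int x2, int y2)
{
    SDL_RenderClear(ren);
    SDL_RenderCopy(ren, photo, NULL, NULL);       /* background photograph */
    SDL_SetRenderDrawColor(ren, 255, 0, 0, 255);  /* red measuring line */
    SDL_RenderDrawLine(ren, x1, y1, x2, y2);
    SDL_RenderPresent(ren);
}

Because the scene is redrawn from scratch every frame, "sliding the line over the image" comes for free: nothing has to be erased.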
SFML, which has also been mentioned in the comments, is on the same level as SDL. I do not know it well enough to give a more informed opinion, but what I said about SDL will most probably also apply to SFML.
In the past I have failed to detect mouse events in GNAT Ada and I am sure I will need to get on top of that; in fact, if I could, I would probably manage to control the line, but doing it over an existing image is beyond me.
HID event processing is handled by whatever GUI library you use. If you use SDL, you'll get mouse events from SDL; if you use GTK, you'll get mouse events from GTK, and so on.
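A minimal SDL2 event loop is only a few lines. A C sketch (the Ada binding exposes the same event types; handle_press and handle_drag are hypothetical callbacks you would supply):

#include <SDL.h>

void handle_press(int x, int y);   /* hypothetical: grab a line end or the cursor */
void handle_drag(int x, int y);    /* hypothetical: move the grabbed element */

void pump_events(int *running)
{
    SDL_Event e;
    while (SDL_PollEvent(&e)) {
        switch (e.type) {
        case SDL_MOUSEBUTTONDOWN:              /* click position in e.button */
            handle_press(e.button.x, e.button.y);
            break;
        case SDL_MOUSEMOTION:                  /* drag while the left button is held */
            if (e.motion.state & SDL_BUTTON_LMASK)
                handle_drag(e.motion.x, e.motion.y);
            break;
        case SDL_QUIT:
            *running = 0;
            break;
        }
    }
}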

Graphics get aliased when going fullscreen in HaxeFlixel?

The graphics in Flash and in non-fullscreen mode are anti-aliased and really smooth, but when I go fullscreen or run on mobile, the graphics become aliased, even when I use an SVG image.
Yes, Flash has great anti-aliasing with its software, CPU-based rendering. In the mobile targets of HaxeFlixel there is a drawing method that is very different, mostly for performance reasons.
In HaxeFlixel, the mobile and cpp targets use the GPU, which is more like WebGL or Flash's Stage3D. This means there will be differences in the way things like the edges of images and text look.
Flixel and OpenFL do a very good job of making these two methods as similar as possible. Some recent work on text for cpp in OpenFL has been very impressive. I am not aware of any solution that makes the two pixel-perfect in a complex game engine for every use case. You will find similar differences with aliasing in Flash game engines like Starling, which also use the GPU.
Some things you can try:
For OpenFL/HaxeFlixel I have set GPU antialiasing before; this should be the default:
<window antialiasing="4" />
If you want to test it, you will lose performance, but I believe you can still enable software rendering on the cpp target with:
FlxG.camera.antialiasing = true;
You mention SVG; I think you are assuming that since it's a vector format it should render perfectly. The GPU rendering first rasterises the image to a bitmap, so if you are expecting it to scale like it does in the browser, it won't. In this case you could use a higher-resolution image and scale it down first.

OpenGL 3.2 Core Sprite Batch Example?

I have been tearing my hair out for a while over this. I need an OpenGL 3.2 Core (no deprecated stuff!) way to efficiently render many sprites, using batching (no instancing).
I've seen examples that do this with geometry alone, but mine also needs to send textures to it, and I don't know how to do this.
I need a well done example of it working in action. And looking at how other libs like monogame and such do it isn't much help, because all I'm interested in is the GL code, and it has to have no deprecated stuff in it.
Basically I want to be able to efficiently render thousands+ of sprites, all having textures. The texture is just a spritesheet, so I just need to tell it to render a region of that spritesheet.
I'm disappointed in the amount of material available for the programmable pipeline, to the point where it seems like it'd be so much easier to just say screw it and use the fixed pipeline, even though I definitely don't want to do that.
So yeah, any full examples that do what I want? Or could somebody more knowledgeable write one up? :)
A lot of the examples are "oh, here's how you render 1 triangle". Well that's great, except nobody needs to render only 1 triangle/quad. And they need to be textured in addition to that!
An example that uses VBOs/VAOs/EBOs
ALSO: this means the code can't use glTexCoordPointer and that stuff, but just raw VBOs/VAOs...
I saw this question and decided to write a little program that does some "sprite" rendering using points and gl_PointSize. I'm not quite sure what you mean by "batching" as opposed to "instancing", but my program uses the glDrawArraysInstanced() call so that I can render multiple points without needing my VBO to be variable-sized. My code also doesn't texture the sprites, but that's easy enough to add in: upload the index of the active texture unit (the one that was active during your call to glTexSubImage2D) to a GLSL sampler2D using glUniform1i.
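That texturing step is only a few calls. A C sketch (untested; `program` and `texture` are assumed to exist, and "spritesheet" is whatever the sampler2D is named in your fragment shader):

/* Bind the sprite sheet to texture unit 0 and point the sampler at it. */
glUseProgram(program);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture);
glUniform1i(glGetUniformLocation(program, "spritesheet"), 0); /* unit index, not the texture id */

/* Selecting a region of the sheet is just arithmetic on the UVs: */
float u0 = region_x / sheet_w,              v0 = region_y / sheet_h;
float u1 = (region_x + region_w) / sheet_w, v1 = (region_y + region_h) / sheet_h;

Here region_x/region_y/region_w/region_h are the pixel bounds of the sub-image and sheet_w/sheet_h the sheet dimensions (as floats); those names are illustrative.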
Anyway, here's the program I wrote: http://litherum.blogspot.com/2013/02/sprites-in-opengl-programmable-pipeline.html Hope you can learn from it!

How to avoid tearing with pygame on Linux/X11

I've been playing with pygame (on Debian/Lenny).
It seems to work nicely, except for annoying tearing of blits (fullscreen or windowed mode).
I'm using the default SDL X11 driver. Googling suggests that it's a known issue with SDL that X11 provides no vsync facility (even with a display created with FULLSCREEN|DOUBLEBUF|HWSURFACE flags), and I should use the "dga" driver instead.
However, running
SDL_VIDEODRIVER=dga ./mygame.py
fails during pygame initialisation with
pygame.error: No available video device
(despite xdpyinfo showing an XFree86-DGA extension present).
So: what's the trick to getting tear-free vsynced flips? Either by getting this dga thing working, or by some other mechanism?
The best way to keep tearing to a minimum is to keep your frame rate as close to the screen's refresh rate as possible. The SDL library doesn't have a vsync unless you're running OpenGL through it, so the only way is to approximate the frame rate yourself.
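In C the approximation is just a delay per frame, something like this sketch (pygame's pygame.time.Clock.tick(60) does the same bookkeeping for you):

/* Inside the main loop: sleep away whatever is left of a 60 Hz frame. */
const Uint32 frame_ms = 1000 / 60;           /* ~16 ms frame budget */
Uint32 start = SDL_GetTicks();
/* ... update the game state and draw ... */
Uint32 elapsed = SDL_GetTicks() - start;
if (elapsed < frame_ms)
    SDL_Delay(frame_ms - elapsed);           /* rest of the frame budget */

Note this only reduces how often a tear is visible; without a real vsync it cannot eliminate tearing entirely.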
The SDL hardware double buffer isn't guaranteed, although it's nice when it works. I've seldom seen it in action.
In my experience with SDL, you have to use OpenGL to completely eliminate tearing. It's a bit of an adjustment, but drawing simple 2D textures isn't all that complicated, and you get a few added bonuses that you can implement, like rotation, scaling, blending and so on.
However, if you still want to use the software rendering, I'd recommend dirty-rectangle updating. It's also a bit difficult to get used to, but it saves loads of processing, which may make it easier to keep the updates up to pace, and it avoids the whole screen tearing (unless you're scrolling the whole play area or something). It also keeps the time spent drawing to the buffer to a minimum, which helps avoid blitting while the screen is updating, which is the cause of the tearing.
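A sketch of the idea against the SDL 1.2 API that this generation of pygame wraps (in pygame you would pass the same rectangles to pygame.display.update; `screen`, `background`, `sprite`, `old_pos` and `new_pos` are assumed to exist):

/* Repaint only the two regions a moving sprite actually touched. */
SDL_Rect dirty[2] = { old_pos, new_pos };
SDL_BlitSurface(background, &dirty[0], screen, &dirty[0]); /* erase the old spot */
SDL_BlitSurface(sprite, NULL, screen, &dirty[1]);          /* draw at the new spot */
SDL_UpdateRects(screen, 2, dirty);                         /* push just those regions */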
Well my eventual solution was to switch to Pyglet, which seems to support OpenGL much better than Pygame, and doesn't have any flicker problems.
Use the SCALED flag and vsync=True when calling set_mode and you should be all set (at least on any systems which actually support this; in some scenarios SDL still can't give you a VSync-capable surface but they are increasingly rare).
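As I understand it, that flag combination makes pygame create an SDL2 renderer with vsynced presents under the hood, which in C terms is roughly (`win` assumed to be an existing SDL_Window):

SDL_Renderer *ren = SDL_CreateRenderer(win, -1,
        SDL_RENDERER_ACCELERATED | SDL_RENDERER_PRESENTVSYNC);

so each present is synchronised with the display's vertical blank instead of tearing mid-frame.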

Antialiasing alternatives

I've seen antialiasing on Windows using GDI+, Java, and also that provided by Photoshop and Gimp. Are there any other libraries out there which provide an antialiasing facility without depending on support from the host OS?
Antigrain Geometry provides anti-aliased graphics in software.
As simon pointed out, the term anti-aliasing is misused/abused quite regularly, so it's always helpful to know exactly what you're trying to do.
Since you mention GDI, I'll assume you're talking about maintaining nice crisp edges when you resize images, so that something like a character in a font looks clean and not pixelated when you resize it to 2x or 3x its original size. For these sorts of things I've used a technique in the past called alpha-tested magnification; you can read the whitepaper here:
http://www.valvesoftware.com/publications/2007/SIGGRAPH2007_AlphaTestedMagnification.pdf
When I implemented it, I used more than one plane so I could get better edges on all types of objects, but the paper covers that briefly towards the end. Of all the approaches (that I've used) to maintain quality when scaling vector images, this was the easiest and highest quality. It also has the advantage of being easy to implement in hardware. From an existing API standpoint, your best bet is to use either OpenGL or Direct3D; that being said, it really only requires bilinear filtering and texture mapping to accomplish what it does, so you could roll your own (I have in the past). If you are always dealing with rectangles and only need to do scaling, it's pretty trivial, and adding rotation doesn't add that much complexity. If you do roll your own, make sure to pay particular attention to subpixel positioning (how you resolve pixel positions that do not fall on a full pixel), as this is critical to the quality and sometimes overlooked.
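The core of the technique is small enough to sketch. A CPU version in C (all names illustrative; a sketch, not the paper's implementation): bilinearly sample an 8-bit alpha/coverage mask, then apply the 0.5 alpha test:

/* mask: w*h bytes, 255 = inside the shape, 0 = outside.
   (u, v) in [0, 1]: the subpixel sample position after scaling/rotation. */
static int sample_covered(const unsigned char *mask, int w, int h,
                          float u, float v)
{
    float x = u * (w - 1), y = v * (h - 1);
    int x0 = (int)x, y0 = (int)y;
    int x1 = x0 + 1 < w ? x0 + 1 : x0;     /* clamp at the edge */
    int y1 = y0 + 1 < h ? y0 + 1 : y0;
    float fx = x - x0, fy = y - y0;        /* subpixel weights */
    float a = mask[y0 * w + x0] * (1 - fx) * (1 - fy)
            + mask[y0 * w + x1] * fx       * (1 - fy)
            + mask[y1 * w + x0] * (1 - fx) * fy
            + mask[y1 * w + x1] * fx       * fy;
    return a >= 127.5f;                    /* the alpha test at 0.5 coverage */
}

The bilinear interpolation is exactly what texture-mapping hardware gives you for free, which is why the GPU version described in the paper is essentially a texture fetch plus an alpha test.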
Hope that helps!
There are many anti-aliasing approaches that can be used (often misnamed, btw, but that's a dead horse). Depending on what you know about the original signal and what the intended use is, different things are most likely to give you the desired result.
"Support from the host OS" is probably most sensible if the output is through the OS display facilities, since they have the most information about what is being done to the image.
I suppose that's a long way of asking what are you actually trying to do? Many graphics libraries will provide some form of antialiasing, whether or not they'll be appropriate depends a lot on what you're trying to achieve.
