Allegro 5 game: How do I set a resolution that is appropriate for the aspect ratio of the screen? - fullscreen

Using Allegro 5, how do I initialize a game in fullscreen mode so that it respects the aspect ratio of the screen (widescreen 16:9 vs. standard 4:3)?
al_create_display(w, h)
lets you select any resolution you want. For example, you can set 640x480 regardless of the screen size, but it will look distorted on a widescreen monitor.
How do you know which ratio to use?

Hm, I can answer this as well - use al_get_monitor_info().
ALLEGRO_MONITOR_INFO info;
al_get_monitor_info(0, &info);
w = info.x2 - info.x1; /* Assume this is 1366 */
h = info.y2 - info.y1; /* Assume this is 768 */
al_create_display(w, h);
Now you can either render everything in a 640x480 rectangle centered within 1366x768 to make it appear pixel-perfect, or alternatively scale your graphics up by 768/480 and keep two black bars to the left and right. If you use OpenGL for rendering, both are very easy to do by simply altering the projection matrix.
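If you draw with Allegro's own routines rather than OpenGL, a display transform achieves the same letterboxing. A minimal sketch, assuming a 640x480 virtual resolution and the w/h values queried above:
al_set_new_display_flags(ALLEGRO_FULLSCREEN);
ALLEGRO_DISPLAY *display = al_create_display(w, h);

float scale = h / 480.0f;                 /* e.g. 768 / 480 = 1.6 */
float offset_x = (w - 640 * scale) / 2;   /* black bars left and right */

ALLEGRO_TRANSFORM t;
al_identity_transform(&t);
al_scale_transform(&t, scale, scale);     /* scale 640x480 up to the screen height */
al_translate_transform(&t, offset_x, 0);  /* centre it horizontally */
al_use_transform(&t);
/* ...draw everything in 640x480 coordinates... */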

Related

Setting the size of an SVG file (via Batik)

If I render to a bitmap, the bitmap has a specific number of pixels and a DPI. That combination makes it easy to draw a square that is 1" x 1": I render lines for each side that are DPI pixels long.
When I create an SVG, I think it should still be possible to set this up the same way, where I set the units per inch and also the size, in those units, of the object as a whole. Yes, you can zoom an SVG since it's all vectors, but it should still have a 100% zoom size to render at.
In my case I am using EMUs for my units, so 914,400 units/inch. So question #1 is: how do I set the scaling using Batik? For a bitmap it's:
AffineTransform scaleToEmus = AffineTransform.getScaleInstance(dpi / (float) DrawingSurface.EPI, dpi / (float) DrawingSurface.EPI);
graphics.transform(scaleToEmus);
But there is no DPI equivalent for SVG.
And then, for a given width & height in EMUs, do I set the size (or maximum extent) of the image using:
svgGraphics.setSVGCanvasSize(new Dimension(width, height));
I think I'm not fully understanding SVG, or at least Batik, as I don't see how to set the units to render at for a 100% zoom.

How do computers draw transparency?

In real life, transparency (or opacity) can be explained in a "simple" way by how much light an object reflects, or how much of it passes through. So if an object is transparent, light passes through it, reflects off whatever is behind it, and comes back to us.
How do computers simulate this behavior? I mean, we as developers have many abstractions and APIs to set the alpha levels and opacities of our pixels, but how do computers translate this into the bitmap that goes to the screen?
What I think is happening: both the back and front colors are "combined" to produce a third color, and this is then drawn to the screen. E.g.: semi-transparent white over a red background will be painted as pink!
Yes, you have it right. The "back" color is combined with the "front" color in proportion to the opacity of the front color.
For a single color channel, e.g. red, with opacity from 0 to 1:
new = old * (1 - opacity) + front * opacity
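For example, blending semi-transparent white over red channel by channel with that formula gives pink. A minimal C++ sketch (the blend() helper is only for illustration):
#include <cstdint>
#include <cstdio>

// Blend one 8-bit channel of the front color over the old color.
uint8_t blend(uint8_t old_c, uint8_t front_c, float opacity) {
    return (uint8_t)(old_c * (1.0f - opacity) + front_c * opacity);
}

int main() {
    // Red background (255, 0, 0), white foreground (255, 255, 255) at 50% opacity.
    uint8_t r = blend(255, 255, 0.5f);
    uint8_t g = blend(0, 255, 0.5f);
    uint8_t b = blend(0, 255, 0.5f);
    std::printf("%u %u %u\n", r, g, b);   // prints 255 127 127, i.e. a pink
    return 0;
}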

Ignoring touches on transparent areas cocos2dx

I have an image of size 480x800 pixels, with an icon in one corner that I need to place. What I want is to ignore all touches on the transparent areas and detect only the area where the icon is.
I found a solution on SO to this problem, but it only shows the code to use. I need to know exactly where to put that code, since I am a beginner and don't know much about cocos2d, so I'm looking for a step-by-step solution.
Cocos2d 2.0 - Ignoring touches to transparent areas of layers/sprites
Do not use glReadPixels because it is affected by bugs in Android drivers. You can translate the CCTouch to a CCPoint in image coordinates using convertTouchToNodeSpace, and read the image pixel at that point.
Create a CCImage from the file that contains the semi-transparent picture, and read the pixel at the tap point; it will be {0, 0, 0, 0} in a transparent area.
Don't forget to check that the tap is not outside the picture, and compute the pixel's position in the CCImage::getData() array as unsigned index = (y * imageWidth + x) * 4, since the RGBA pixels are stored row by row, 4 bytes each.
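Putting the pieces together, a minimal sketch of a touch handler, assuming cocos2d-x 2.x; the names GameLayer, m_sprite, m_image and the icon file are placeholders for your own classes and resources, and m_image is a CCImage loaded once with initWithImageFile():
bool GameLayer::ccTouchBegan(CCTouch* touch, CCEvent* event)
{
    // Convert the touch into the sprite's local (node) coordinate space.
    CCPoint local = m_sprite->convertTouchToNodeSpace(touch);

    // Reject taps outside the picture entirely.
    if (local.x < 0 || local.y < 0 ||
        local.x >= m_image->getWidth() || local.y >= m_image->getHeight())
        return false;

    // Node space has its origin at the bottom-left, while CCImage data is
    // usually stored top-to-bottom, so flip the y coordinate (verify this
    // against your own image).
    unsigned x = (unsigned)local.x;
    unsigned y = (unsigned)(m_image->getHeight() - 1 - local.y);

    // RGBA8888 data: 4 bytes per pixel, rows of getWidth() pixels.
    unsigned index = (y * m_image->getWidth() + x) * 4;
    unsigned char alpha = m_image->getData()[index + 3];

    // Swallow the touch only if the tapped pixel is not fully transparent.
    return alpha > 0;
}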

webgl: white border when using transparency (alpha)

When rendering textures that have an alpha channel, a white border appears around the non-transparent part (the border seems to consist of the pixels that have an alpha > 0 and < 1):
The original texture was created in Illustrator and exported as a PNG. Here it is:
(Well, it seems Stack Overflow altered the image, adjusting pixels that are not completely opaque/transparent, so here is a link.)
It is probably the blending, though I don't know what is wrong with the setup:
gl.enable(gl.BLEND);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
[Update]
Here is a rendered version, where I added an alpha gradient to the left part of the texture (so it goes from 0 opacity at the left edge to 1 at the halfway point).
This texture is the only texture rendered at this position. It seems to be whitest around a = 0.5, which is really weird. The background is just a cleared color:
gl.clearColor(0.603, 0.76, 0.804, 1.0);
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
// render objects here
the depth-function looks like:
gl.enable(gl.DEPTH_TEST);
gl.depthFunc(gl.LEQUAL);
Any ideas? Thanks a lot.
[Update 2]
Answering my own question: the effect occurs when the background color of the canvas or the body of the HTML page is white. I don't have an explanation, though.
Use premultiplied alpha and this problem will go away.
See: http://home.comcast.net/~tom_forsyth/blog.wiki.html#%5B%5BPremultiplied%20alpha%5D%5D
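Premultiplying can also be done once on the pixel data before the texture is uploaded; after that the blend function changes from (SRC_ALPHA, ONE_MINUS_SRC_ALPHA) to (ONE, ONE_MINUS_SRC_ALPHA). A minimal C++ sketch of the premultiply step, assuming an 8-bit RGBA buffer (the helper name is made up):
#include <cstdint>
#include <vector>

// Premultiply an 8-bit RGBA buffer in place: RGB is scaled by the pixel's
// alpha, so fully transparent pixels become (0, 0, 0, 0) and can no longer
// bleed white into the filtered border.
void premultiplyAlpha(std::vector<uint8_t>& rgba) {
    for (size_t i = 0; i + 3 < rgba.size(); i += 4) {
        unsigned a = rgba[i + 3];
        rgba[i + 0] = (uint8_t)(rgba[i + 0] * a / 255);
        rgba[i + 1] = (uint8_t)(rgba[i + 1] * a / 255);
        rgba[i + 2] = (uint8_t)(rgba[i + 2] * a / 255);
    }
}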
This is a problem related to the texture's linear interpolation. On the borders, some interpolated texels end up half white, half green, with 0.5 alpha. You should modify your texture to extend the border by one more green pixel, even if it is totally transparent.
What's your draw order? This looks like a depth buffering issue to me — you start with a white background, draw the thing with the border so that it's composited on the white, then draw the thing behind the thing with the border. Those areas where the border was blended with the original white background will have stored a value in the depth buffer equal to the depth of their plane, so when the object behind is subsequently drawn, its pixels are discarded in that area.
The general rule is to draw transparent objects after opaque objects, usually from back to front. If you're using additive blending then it's often good enough to disable the depth buffer after the opaque draw and draw them in any order.
When setting the FragColor in the shader, try multiplying the image RGB with the image alpha.

SetWorldTransform() and font rotation

I'm trying to display text on a Windows control, rotated by 90 degrees, so that it reads from 'bottom to top' so to speak; basically it's the label on the Y axis of a graph.
I got my text to display vertically by changing the coordinate system of the DC with SetGraphicsMode(GM_ADVANCED) and then using
XFORM transform;
const double angle = 90 * (boost::math::constants::pi<double>() / 180);
transform.eM11 = (FLOAT)cos(angle);
transform.eM12 = (FLOAT)(-sin(angle));
transform.eM21 = (FLOAT)sin(angle);
transform.eM22 = (FLOAT)cos(angle);
transform.eDx = 0.0;
transform.eDy = 0.0;
dc.SetWorldTransform(&transform);
Now when I run my program, the rotated text looks different from the same text when it's shown 'normally' (horizontally). I've tried with a fixed-width (system) font and the default WinXP font. The system font comes out looking anti-aliased, and the other one looks almost as if it's being drawn with a font one pixel smaller than the horizontal version, although they are drawn using the same DC with no font changes in between. It looks as if Windows detects that I'm drawing a font not along the normal (0 degrees) axis and tries to 'optimize' by anti-aliasing.
Now I don't want any of that. I just want the text that I draw horizontally to be drawn exactly the same, except rotated 90 degrees, which should be possible since it's a rotation of exactly 90 degrees. Does anyone know what's going on and whether I can change this easily to work the way I want? I'd hate to have gone through all this trouble only to find out that I have to resort to rendering to an off-screen bitmap, rotating it with a simple pixel-by-pixel rotation and blitting that into my control :(
Have you tried setting the nEscapement and nOrientation parameters when you create the font instead of using SetWorldTransform? See CreateFont for details.
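For reference, a minimal sketch of creating a 90-degree rotated font this way; escapement and orientation are given in tenths of a degree, and the face name and quality flags here are only examples:
// Create a font rotated 90 degrees via escapement/orientation
// (both are specified in tenths of a degree).
HFONT hRotated = CreateFont(
    -12,                      // height (negative = character height)
    0,                        // width (0 = default aspect ratio)
    900,                      // escapement: 90 degrees, text runs bottom-to-top
    900,                      // orientation: keep equal to the escapement
    FW_NORMAL, FALSE, FALSE, FALSE,
    DEFAULT_CHARSET,
    OUT_DEFAULT_PRECIS, CLIP_DEFAULT_PRECIS,
    DEFAULT_QUALITY,          // or NONANTIALIASED_QUALITY to forbid anti-aliasing
    DEFAULT_PITCH | FF_DONTCARE,
    TEXT("Tahoma"));

HGDIOBJ hOld = SelectObject(hdc, hRotated);   // hdc: the control's device context
// ...TextOut/DrawText as usual; the text is drawn rotated...
SelectObject(hdc, hOld);
DeleteObject(hRotated);
Note that only TrueType (and other vector) fonts rotate this way; bitmap fonts such as the old System font will not.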
