I am currently working on a freehand drawing application where I need to support multiple background textures, for example a paper-like texture or an image. The drawing should behave identically on every background texture.
It works great when using one or the other texture as the starting point for the drawing, i.e. blending all incoming strokes directly with the background texture.
However, I want to take a different approach: drawing all strokes to an initially transparent layer and then blending this layer with the selected background. This has the advantage that the drawing is independent of and separated from the background. For example, I could composite the whole drawing onto a different background without having to blend all strokes with that background directly.
The problem is: depending on the color of the transparent layer, the blended image (background + "stroke layer") looks totally different. For example, with RGBA values, setting the transparent layer to transparent white (1,1,1,0) yields much brighter colors than setting it to transparent black (0,0,0,0). This makes sense, because the strokes have to be blended with the transparent color. What I basically want is a "neutral" transparency: the strokes on this layer should interact only with the background image, not with the transparent layer itself. The transparent layer should only be used to store the drawn strokes.
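To make that concrete, here is a hypothetical sketch (not the app's actual code) of a naive blend that mixes the stroke with the layer's RGB regardless of the layer's alpha, which produces exactly this difference:
// Naive "over" blend in straight RGBA floats that ignores the destination's alpha when mixing RGB.
// Blending a 50% red stroke (1, 0, 0, 0.5) onto transparent white (1, 1, 1, 0) yields
// RGB (1, 0.5, 0.5), while transparent black (0, 0, 0, 0) yields RGB (0.5, 0, 0).
static float[] NaiveOver(float[] dest, float[] source)
{
    float a = source[3];
    return new float[]
    {
        a * source[0] + (1f - a) * dest[0],
        a * source[1] + (1f - a) * dest[1],
        a * source[2] + (1f - a) * dest[2],
        a + (1f - a) * dest[3]
    };
}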
My question: is this somehow possible? I can't find a way to solve it. The problem is that the transparent layer (which is just a texture filled with a transparent color) must have some color, and incoming strokes must be blended with that color. Is there a way to avoid this?
I figured out how to do it:
For blending two colors that both carry alpha, Porter-Duff compositing (the "over" operator) can be used; it is described here, for example. Blending a destination and a source color can be done this way:
// Porter-Duff "source over destination" for straight (non-premultiplied) RGBA colors.
inline float4 porter_duff_blending(float4 dest, float4 source) {
    float alpha = source.a;
    float inv_alpha = 1.0 - alpha;
    float blend_alpha = alpha + inv_alpha * dest.a;
    if (blend_alpha == 0.0) {
        return float4(0.0, 0.0, 0.0, 0.0);   // both inputs fully transparent
    }
    float4 blend_color = (1.0 / blend_alpha) * ((alpha * source) + (inv_alpha * dest.a * dest));
    blend_color.a = blend_alpha;
    return blend_color;
}
This allows the drawing to be kept in a separate transparent layer that can then be applied to different backgrounds.
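As an illustrative, CPU-side sketch of that last step (the flat float array layout and the method name are assumptions, not the app's actual data structures), compositing the finished stroke layer onto an opaque background is just the same "over" operator with an opaque destination:
// Composite a straight-alpha RGBA stroke layer over an opaque background, pixel by pixel.
// Each pixel is 4 floats (R, G, B, A) in [0, 1].
static void CompositeLayerOverBackground(float[] background, float[] strokeLayer)
{
    for (int i = 0; i < background.Length; i += 4)
    {
        float a = strokeLayer[i + 3];
        // With an opaque destination, Porter-Duff "over" reduces to a simple lerp.
        for (int c = 0; c < 3; c++)
            background[i + c] = a * strokeLayer[i + c] + (1f - a) * background[i + c];
        // The background alpha stays 1 (opaque).
    }
}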
Related
In a game that I am currently developing I use textures with premultiplied alpha colors (the default in XNA 4). On that basis, I create hybrid blending textures, meaning that I mix traditional alpha blending and additive blending in one texture.
Basically, this is achieved by creating textures that contain pixels with solid colors (RGBA with A = 255), alpha-blended colors (R'G'B'A with A < 255 and R' = R * A), and additive colors (R''G''B''A with A < 255 and R'' != R * A). Usually, pure additive blending assumes A = 0, but that is not required here. The textures are created in the game by a mathematical function.
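A small sketch of how such pixels could be authored in code (XNA-style Color with integer components; the helper names are illustrative, not from the original project):
// requires Microsoft.Xna.Framework

// Solid pixel: fully opaque, blended normally.
static Color SolidPixel(int r, int g, int b)
{
    return new Color(r, g, b, 255);                              // A = 255
}

// Alpha-blended pixel in premultiplied form: R' = R * A.
static Color BlendedPixel(int r, int g, int b, int a)
{
    return new Color(r * a / 255, g * a / 255, b * a / 255, a);
}

// Additive pixel: R'' != R * A; with A = 0 nothing behind it is occluded,
// so the RGB is purely added on top.
static Color AdditivePixel(int r, int g, int b)
{
    return new Color(r, g, b, 0);
}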
Now the problem is that these textures work great in the game, but there seems to be no software tool that is able to edit them. While the PNG format seems to keep the color information intact, any attempt to edit or save such a file destroys the colors. So far I have tried Adobe Photoshop, GIMP, CorelDRAW and Visual Studio 11.
What I ultimately want to do is load these textures into a tool, edit them, and possibly save them as DDS (creating mipmaps).
Commonly, techniques such as supersampling or multisampling are used to produce high fidelity images.
I've been messing around on mobile devices with CSS3 3D lately, and this trick does a fantastic job of obtaining high-quality, non-aliased edges on quads.
The way the trick works is that the texture for the quad gains two extra pixels in each dimension, forming a transparent one-pixel-wide outline outside the border. Thanks to texture sampling interpolation, as long as the transformation does not put the camera too close to an edge, the effect is not unlike a pre-filtered antialiased rendering approach.
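For concreteness, here is a sketch of the padding step expressed with an XNA-style Texture2D; that framing is an assumption, since the original trick lives in CSS/HTML.
// Copy the source pixels into a texture that is two pixels larger in each dimension,
// leaving a one-pixel transparent (0,0,0,0) border that bilinear sampling can fade into.
static Texture2D AddTransparentBorder(GraphicsDevice device, Texture2D source)
{
    var src = new Color[source.Width * source.Height];
    source.GetData(src);

    int w = source.Width + 2;
    int h = source.Height + 2;
    var dst = new Color[w * h];            // default Color is fully transparent
    for (int y = 0; y < source.Height; y++)
        for (int x = 0; x < source.Width; x++)
            dst[(y + 1) * w + (x + 1)] = src[y * source.Width + x];

    var padded = new Texture2D(device, w, h);
    padded.SetData(dst);
    return padded;
}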
What are the conceptual and technical limitations of taking this sort of approach to render a 3D model, for example?
I think I already have one point that precludes using this kind of trick in the general case: whenever the geometry is not rectangular, it does nothing to reduce aliasing. The smooth result that the transparent 1px outline border gives for HTML5 with CSS3 depends on those elements being rectangular, so that they rasterize neatly into a pixel grid.
The trick you linked to doesn't seem to have anything to do with texture interpolation. The CSS adds a border that is drawn as a line, and the browser's rasterizer draws polygons without antialiasing but draws lines with antialiasing.
As for why you wouldn't want to blend into transparency over a one-pixel border: transparency is very difficult to draw correctly and can lead to artifacts when polygons are not drawn from back to front. You either need to pre-sort your polygons by distance, or keep your polygons opaque and resolve occlusion using a depth buffer and multisampling.
I'm having some issues with smooth alpha gradients in texture files resulting in bad banding.
I have a 2D XNA WP7 game and I've come up with a fairly simple lighting system. I draw the areas that would be lit by the light in a separate RenderTarget2D, apply a sprite to dim the edges as you get further away from the light, then blend that final lighting image with the main image to make certain areas darker and lighter.
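For context, here is a rough sketch of that kind of two-pass setup in XNA 4; the names (lightMap, spotlightMask, sceneTexture) and the exact multiplicative blend state are placeholders for illustration, not the actual code.
// Pass 1: build the light map in an offscreen render target.
GraphicsDevice.SetRenderTarget(lightMap);              // lightMap is a RenderTarget2D
GraphicsDevice.Clear(Color.Black);                     // unlit areas stay dark
spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Additive);
spriteBatch.Draw(spotlightMask, lightPosition, Color.White);
spriteBatch.End();

// Pass 2: draw the scene to the back buffer, then modulate it with the light map.
GraphicsDevice.SetRenderTarget(null);
spriteBatch.Begin();
spriteBatch.Draw(sceneTexture, Vector2.Zero, Color.White);
spriteBatch.End();

// A multiplicative blend state: lit areas keep the scene's color, dark areas darken it.
var multiplyBlend = new BlendState
{
    ColorSourceBlend = Blend.DestinationColor,
    ColorDestinationBlend = Blend.Zero
};
spriteBatch.Begin(SpriteSortMode.Deferred, multiplyBlend);
spriteBatch.Draw(lightMap, Vector2.Zero, Color.White);
spriteBatch.End();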
Here's what I've got so far: the banding is pretty bad. The alpha transparency is quite smooth in the source image, but whenever I draw the sprite, it gets these huge ugly steps between colors. Just to check, I drew the spotlight mask straight onto the scene with normal alpha blending and I still got the banding.
Is there any way to preserve smooth alpha gradients when drawing sprites?
"Is there any way to preserve smooth alpha gradients when drawing sprites?"
No, you cannot. WP7 phones currently use a 16-bit color format: each pixel gets 5 red bits, 6 green bits and 5 blue bits (green gets the extra bit because human vision is most sensitive to green).
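As a rough illustration of the step size involved (assuming 8-bit source values, which is a simplification):
// Collapsing an 8-bit channel to 5 bits leaves only 32 levels, so a smooth
// gradient turns into visible bands roughly 8 values wide.
static byte QuantizeTo5Bits(byte value)
{
    int fiveBit = value >> 3;              // 0..31
    return (byte)(fiveBit * 255 / 31);     // expand back to the 0..255 range
}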
I found out that with Mango, apps can now specify that they support 32 bpp, and it will work on devices that support it!
For XNA, put this line at the top of OnNavigatedTo:
SharedGraphicsDeviceManager.Current.PreferredBackBufferFormat = SurfaceFormat.Color;
For Silverlight add BitsPerPixel="32" to the App element in WMAppManifest.xml.
I'm having a problem when rendering even simple shapes with partial opacity to QGLFramebufferObjects in Qt.
I have reduced the problem down to this:
When I render a simple quad to a QGLFramebufferObject with its color set to (1, 0, 0, 0.5) and then blit that to the screen, I get a result that is way too light a red for 50% opacity. If I draw the same quad with the same color (same code, in fact) directly to the screen, I get the correct color value. If I render the quad with opacity == 1.0, then the results are the same: I get a full, deep red in both cases. I've confirmed that the color is really wrong in the buffer by dumping the buffer to disk directly with buffer.toImage().save("/tmp/blah.tif").
In both cases, I've cleared the output buffer to (1,1,1,1) before performing the operation.
Why are things I draw that are partially transparent coming out lighter when drawn to an offscreen buffer than if I draw them right to the screen? There must be some state that I have to set on the FBO or something, but I can't figure out what it is.
Alpha does not mean "transparent." It doesn't mean anything at all; it only takes on a meaning when you give it one. It only means "transparent" when you set up a blend mode that uses alpha to control transparency. So if you didn't set up a blend mode that creates the effect of transparency, then alpha is just another color component that will be written exactly as-is to the framebuffer.
I have sprites that, when they overlap, I would like to 'add' their RGB color values so they (potentially) go to white; the sprites also have changing alpha values, which should remain unchanged. I've already tried all the SpriteBatch options: AlphaBlend, Additive, etc.
Is this possible through SpriteBatch, or will I need a shader?
Thanks,
Paul.
Using the premultiplied alpha scheme in XNA 4, you can do additive blending by drawing your texture with 0 alpha. Because an alpha of 0 means the texture blocks nothing of what is behind it, while its RGB is still added to those pixels, you get additive blending.
Just draw the texture with 0 alpha using the SpriteBatch blend state AlphaBlend. To lower the additivity, increase alpha; to make it less visible, lower the RGB.
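A minimal sketch of those draw calls (XNA 4, assuming the texture content was processed with premultiplied alpha; glowTexture and glowPosition are placeholder names):
spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend);

// Alpha 0: the sprite occludes nothing behind it, so its RGB is simply added to the scene.
spriteBatch.Draw(glowTexture, glowPosition, new Color(255, 255, 255, 0));

// Raising alpha moves back toward normal alpha blending; lowering RGB makes the sprite dimmer.
spriteBatch.Draw(glowTexture, glowPosition, new Color(128, 128, 128, 64));

spriteBatch.End();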
If you use this, I highly suggest making sure any textures in your content have the option 'Premultiplied Alpha' ticked in their properties.