I have sprites that, when they overlap, I would like to 'add' their RGB values so they (potentially) go to white. The sprites also have changing alpha values, which should remain unchanged. I've already tried all of the SpriteBatch blend options: AlphaBlend, Additive, etc.
Is this possible through SpriteBatch, or will I need a shader?
Thanks,
Paul.
Using the premultiplied alpha scheme in XNA 4, you can do additive blending by drawing your texture with 0 alpha. An alpha of 0 means the texture blocks nothing behind it, so its RGB is simply added to the pixels underneath, and you get additive blending.
Just draw the texture with 0 alpha using the SpriteBatch blend state BlendState.AlphaBlend. To reduce the additive effect, increase the alpha; to make the sprite less visible, lower its RGB.
If you use this, I highly suggest making sure any textures in your content have the 'Premultiplied Alpha' option ticked in their properties.
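For reference, a minimal sketch of that trick with SpriteBatch (assuming an already loaded Texture2D called glowTexture and a Vector2 called position; both names are just placeholders):

// BlendState.AlphaBlend with premultiplied alpha: result = src.rgb + dest.rgb * (1 - src.a)
spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend);

// A tint with full RGB but zero alpha keeps the sprite's colour while removing its
// "blocking" term, so the draw effectively becomes: result = src.rgb + dest.rgb
spriteBatch.Draw(glowTexture, position, new Color(255, 255, 255, 0));

spriteBatch.End();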
I am currently working on a freehand drawing application where I need to support multiple background textures. For example, a paper-like texture or an image. The drawing on both background textures should behave identically.
It works great when using one or the other texture as a starting point for the drawing, i.e. blend all incoming strokes directly with the background texture.
However, I want to take a different approach: Drawing all strokes to an initially transparent layer, then blending this layer with the selected background. This has the advantage that the drawing is independent and separated from the background. For example I could blend the whole drawing with a different background without having to blend all strokes with this background directly.
The problem is: depending on the color of the transparent layer, the outcome of the blended image (background + "stroke layer") looks totally different. For example, with RGBA values, setting the transparent layer to transparent white (1,1,1,0) yields much brighter colors than setting the layer to transparent black (0,0,0,0). This makes sense, because we have to blend the strokes with the transparent color. What I basically want is a "neutral" transparency: the strokes on this transparent layer should interact only with the background image, not with the transparent layer itself. The transparent layer should only be used to store the drawn strokes.
My question: Is this somehow possible? I can't find a way to solve this. The problem is that the transparent layer (which is just a texture with a transparent color) must have a color and incoming strokes must be blended with this color. Is there a way to avoid this somehow?
I figured out how to do it:
For blending two transparent colors, the Porter-Duff algorithm can be used (it is described here, for example). Blending the destination and source colors can be done this way:
inline float4 porter_duff_blending(float4 dest, float4 source) {
    // Porter-Duff "over" compositing for non-premultiplied colors.
    float alpha = source.a;
    float inv_alpha = 1 - alpha;

    // Combined coverage: source alpha plus whatever the destination still contributes.
    float blend_alpha = alpha + inv_alpha * dest.a;

    // Alpha-weighted average of the two colors, renormalized by the combined alpha
    // (blend_alpha is assumed to be non-zero here).
    float4 blend_color = (1.0 / blend_alpha) * ((alpha * source) + (inv_alpha * dest.a * dest));
    blend_color.a = blend_alpha;
    return blend_color;
}
This allows the drawing to live in a separate transparent layer, which can then be applied to different backgrounds.
I am trying to figure out how to achieve a smooth transition between two colors.
For example, this image is taken from Wikipedia.
When I try to do the same in my code (C++), the first idea that came to mind was to use the HSV color space, but the annoying in-between colors show up.
What is a good way to achieve this?
This is going to sound weird, maybe... but vertex shaders will do this nicely. If that's a quad (two tris), then place one colour on the left two vertices and the other on the right two, and it should blend across nicely.
Caveat: Assumes you're using some kind of OpenGL.
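A minimal sketch of the same idea, written here with XNA's BasicEffect and vertex colours rather than OpenGL (graphicsDevice is assumed to be a valid GraphicsDevice; any API that interpolates per-vertex colours behaves the same way):

// Two triangles forming a quad: the left vertices are red, the right ones blue.
// The rasterizer interpolates the colours linearly across the surface.
VertexPositionColor[] quad =
{
    new VertexPositionColor(new Vector3(-1, -1, 0), Color.Red),
    new VertexPositionColor(new Vector3(-1,  1, 0), Color.Red),
    new VertexPositionColor(new Vector3( 1, -1, 0), Color.Blue),

    new VertexPositionColor(new Vector3( 1, -1, 0), Color.Blue),
    new VertexPositionColor(new Vector3(-1,  1, 0), Color.Red),
    new VertexPositionColor(new Vector3( 1,  1, 0), Color.Blue),
};

var effect = new BasicEffect(graphicsDevice) { VertexColorEnabled = true };
foreach (var pass in effect.CurrentTechnique.Passes)
{
    pass.Apply();
    graphicsDevice.DrawUserPrimitives(PrimitiveType.TriangleList, quad, 0, 2);
}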
The only part of your question I feel I can answer is that you must somehow be transitioning through too many values in the H part of HSV.
H is for hue (different colors, like the rainbow effect in your gradient). In this case, it looks to me like you are only merging 2 different hues.
S is for saturation (strength of color, from highly saturated down to gray).
V is for value (brightness, from black up to your full color).
This is caused by a lack of color (saturation) in between, as black (or grey, in your case) is simply desaturated. It is like putting two transparent fade images together: there is a see-through area in the middle, because two 50% transparencies don't add up to a 100% solid color.
To avoid this, I'd suggest placing one color above the other and fading the top one to transparent. That way there is a solid color base with the transition above it.
I don't know what you're using to display this (DirectX, Windows display, or whatever), but try just having two images: one of a solid color, and in front of it a single color with a fade from solid to transparent. That might work.
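As a rough sketch of that layering idea in XNA/SpriteBatch terms (purely illustrative: solidTexture would be filled with the second color, fadeTexture would hold the first color with alpha falling from 255 to 0 across its width, and gradientArea is the destination Rectangle):

spriteBatch.Begin(); // the default SpriteBatch state is alpha blending

// The solid base color fills the whole gradient area.
spriteBatch.Draw(solidTexture, gradientArea, Color.White);

// The second texture fades from opaque to transparent on top of it,
// so there is never a washed-out band in the middle.
spriteBatch.Draw(fadeTexture, gradientArea, Color.White);

spriteBatch.End();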
I'm having some issues with smooth alpha gradients in texture files, resulting in bad banding.
I have a 2D XNA WP7 game and I've come up with a fairly simple lighting system. I draw the areas that would be lit by the light in a separate RenderTarget2D, apply a sprite to dim the edges as you get further away from the light, then blend that final lighting image with the main image to make certain areas darker and lighter.
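Roughly, the setup looks like this (a sketch of the approach described above, not the actual code; lightMap, spotlightMask, lightPosition and the multiply blend state are all placeholder assumptions):

// Build the light map in an offscreen render target.
graphicsDevice.SetRenderTarget(lightMap);
graphicsDevice.Clear(Color.Black);
spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Additive);
spriteBatch.Draw(spotlightMask, lightPosition, Color.White); // bright centre fading towards the edges
spriteBatch.End();
graphicsDevice.SetRenderTarget(null);

// ...draw the scene normally here, then darken/brighten it with the light map...
var multiplyBlend = new BlendState
{
    ColorSourceBlend = Blend.DestinationColor,
    ColorDestinationBlend = Blend.Zero,
};
spriteBatch.Begin(SpriteSortMode.Deferred, multiplyBlend);
spriteBatch.Draw(lightMap, Vector2.Zero, Color.White);
spriteBatch.End();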
Here's what I've got so far:
As you can see, the banding is pretty bad. The alpha transparency is quite smooth in the source image, but whenever I draw the sprite, it gets these huge, ugly steps between colors. Just to check, I drew the spotlight mask straight onto the scene with normal alpha blending, and I still got the banding.
Is there any way to preserve smooth alpha gradients when drawing sprites?
Is there any way to preserve smooth alpha gradients when drawing sprites?
No, you cannot. WP7 phones currently use a 16-bit color system: one pixel gets 5 red bits, 6 green bits and 5 blue bits (human vision is most sensitive to green).
Found out that with Mango, apps can now specify that they support 32bpp, and it will work on devices that support it!
For XNA, put this line at the top of OnNavigatedTo:
SharedGraphicsDeviceManager.Current.PreferredBackBufferFormat = SurfaceFormat.Color;
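In context, that line sits inside the page's navigation override (a sketch; the surrounding method comes from the standard WP7 XNA page template, and the rest of its body is unchanged):

protected override void OnNavigatedTo(NavigationEventArgs e)
{
    // Request a full 32bpp back buffer instead of the default 16bpp format.
    SharedGraphicsDeviceManager.Current.PreferredBackBufferFormat = SurfaceFormat.Color;

    // ...the rest of the template's OnNavigatedTo code stays as it was...

    base.OnNavigatedTo(e);
}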
For Silverlight add BitsPerPixel="32" to the App element in WMAppManifest.xml.
I'm having a problem when rendering even simple shapes with partial opacity to QGLFramebufferObjects in Qt.
I have reduced the problem down to this:
When I render a simple quad to a QGLFramebufferObject with its color set to (1, 0, 0, .5) and then blit that to the screen, I get a result that is way too light a red for 50% opacity. If I draw the same quad with the same color (same code, in fact) directly to the screen, I get the correct color value. If I render the quad with opacity == 1.0, then the results are the same: I get a full, deep red in both cases. I've confirmed that the color is really wrong in the buffer by dumping the buffer to disk directly with buffer.toImage().save("/tmp/blah.tif").
In both cases, I've cleared the output buffer to (1,1,1,1) before performing the operation.
Why are things I draw that are partially transparent coming out lighter when drawn to an offscreen buffer than if I draw them right to the screen? There must be some state that I have to set on the FBO or something, but I can't figure out what it is.
Alpha does not mean "transparent." By itself it doesn't mean anything at all; it only takes on a meaning when you give it one. It only means "transparent" when you set up a blend mode that uses alpha to control transparency. So if you didn't set up a blend mode that creates the effect of transparency, then alpha is just another color component that will be written exactly as-is to the framebuffer.
How can I convert a grayscale value (0-255) to an RGB value/representation?
It is for using in an SVG image, which doesn't seem to come with a grayscale support, only RGB...
Note: this is not RGB -> grayscale, which is already answered in another question (e.g. Converting RGB to grayscale/intensity).
The quick and dirty approach is to repeat the grayscale intensity for each component of RGB. So, if you have grayscale 120, it translates to RGB (120, 120, 120).
This is quick and dirty because the effective luminance you get depends on the actual luminance of the R, G and B subpixels of the device that you're using.
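Since the target here is SVG (which wants an RGB color), here is a tiny illustrative C# helper for that repeat (the method name is made up):

// 120 -> "#787878"; use string.Format("rgb({0},{0},{0})", grey) if you prefer the rgb() form.
static string GreyToSvgColor(int grey)
{
    return string.Format("#{0:X2}{0:X2}{0:X2}", grey);
}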
If you have the greyscale value in the range 0..255 and want to produce a new value in the form 0x00RRGGBB, then a quick way to do this is:
int rgb = grey * 0x00010101;
or equivalent in your chosen language.
Conversion of grayscale to RGB is simple: just use R = G = B = the gray value. The basic idea is that color (as viewed on a monitor in terms of RGB) is an additive system.
http://en.wikipedia.org/wiki/Additive_color
Thus adding red to green yields yellow. Add some blue to that mix in equal amounts and you get a neutral color. Full-on [red, green, blue] = [255, 255, 255] yields white; [0, 0, 0] yields monitor black. Intermediate values, where R = G = B are all equal, will yield nominally neutral colors at the given level of gray.
A minor problem is depending on how you view the color, it may not be perfectly neutral. This will depend on how your monitor (or printer) is calibrated. There are interesting depths of color science we could go into from this point. I'll stop here.
Grey-scale means that all channels have the same intensity. Set all channels (in RGB) equal to the grey value and you will have an RGB black-and-white image.
Wouldn't setting R, G, and B to the same value (the greyscale value) for each pixel get you a correct shade of gray?
You may also take a look at my solution, Faster assembly optimized way to convert RGB8 image to RGB32 image. The gray channel is simply repeated in all the other channels.
The purpose was to find the fastest possible solution for the conversion using x86/SSE.