In real life, transparency (or opacity) can be explained in a "simple" way by how much light an object reflects versus how much it lets pass through. So if an object is transparent, light passes through it, reflects off whatever is behind it, and comes back to us.
How do computers simulate this behavior? I mean, as developers we have many abstractions and APIs for setting the alpha levels and opacities of our pixels, but how does the computer translate this into the bitmap that reaches the screen?
What I think is happening: the back and front colors are "combined" to produce a third color, and that is what gets drawn to the screen. E.g.: transparent white over a red background will be painted as pink!
Yes, you have it right. The "back" color is combined with the "front" color in proportion to the opacity of the front color.
For a single color channel, e.g. red, with opacity from 0 to 1:
new = old * (1 - opacity) + front * opacity
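For example, applying that formula per channel might look like the following C# sketch (the helper name BlendOver and the use of System.Drawing.Color are just for illustration, not part of any particular API):

using System;
using System.Drawing;

static class AlphaBlend
{
    // new = old * (1 - opacity) + front * opacity, applied to each channel.
    public static Color BlendOver(Color back, Color front, double opacity)
    {
        byte Mix(byte bg, byte fg) =>
            (byte)Math.Round(bg * (1.0 - opacity) + fg * opacity);

        return Color.FromArgb(
            Mix(back.R, front.R),
            Mix(back.G, front.G),
            Mix(back.B, front.B));
    }
}

// Example: 50% white over red gives pink (255, 128, 128).
// Color pink = AlphaBlend.BlendOver(Color.FromArgb(255, 0, 0), Color.White, 0.5);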
I have an RGB332 LCD and a fairly weak MCU to drive it. The MCU has no hardware accelerator, and an RGB332 display has no alpha channel.
So I used the color "black" as an "alpha color" (a color key) for pasting icons, which means I copy an icon pixel into the background buffer only when that pixel is not black.
The problem I ran into is that the icons show their own antialiased edges when the background is not black, and those "antialiased edge" pixels stand out as a visible fringe against the background.
Is there any way to deal with this situation?
The main problem is that I don't have "layers" and "alpha" to do a Photoshop-like merge.
But the icons are pasted into the frame buffer one by one.
So my solution is:
When each icon is being pasted, I know which color is the foreground and which is the background, which means I can detect the "antialiased edge" of the icon just as if I had "layers".
Once I have found the antialiased edge pixels, I fill them with a middle color between foreground and background.
The LCD is RGB332, and the middle-color calculation is simply filling the edge with 75% background color + 25% foreground color. If the icon colors are carefully chosen, you don't even need floating-point math.
This may not be the most efficient approach, but it really solved my problem.
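To illustrate that 75%/25% middle-color step, here is a small sketch in C# (the packing helpers and names are invented for the example and assume the usual RRRGGGBB bit layout; the real code runs on the MCU in plain C with the same integer math):

static class Rgb332Blend
{
    // Unpack an RGB332 pixel (RRRGGGBB) into its 3-3-2 bit channels.
    static (int r, int g, int b) Unpack(byte p) =>
        ((p >> 5) & 0x07, (p >> 2) & 0x07, p & 0x03);

    // Pack 3-3-2 bit channels back into one byte.
    static byte Pack(int r, int g, int b) =>
        (byte)((r << 5) | (g << 2) | b);

    // "Middle color" for an antialiased edge pixel:
    // 75% background + 25% foreground, integer math only.
    public static byte EdgeColor(byte background, byte foreground)
    {
        var bg = Unpack(background);
        var fg = Unpack(foreground);
        return Pack(
            (3 * bg.r + fg.r) / 4,
            (3 * bg.g + fg.g) / 4,
            (3 * bg.b + fg.b) / 4);
    }
}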
I am referring to an older question about color blending with GDI+:
Using GDI+ with Windows Forms, I want to be able to draw with a pen and blend color based on the destination pixel color.
For example, if I draw a line and it passes over black pixels, I want it to be a lighter color (like white for example) so that it's visible. When that same line passes over white pixels, it should be a darker color (black for example) so that it's still clearly visible.
The answer says to use a color matrix for the transformation, so I started implementing it.
My image is available as raw data in RGB48 format:
Gdiplus::Bitmap image(input.width,input.height,input.width*6,PixelFormat48bppRGB,(unsigned char*)rgb48);
Gdiplus::Image *images= image.GetThumbnailImage(input.width,input.height);
Gdiplus::TextureBrush brush(images);
Gdiplus::Pen pen(&brush);
Gdiplus::ColorMatrix matrix={
-1.0f,0.0f,0.0f,0.0f,0.0f,
0.0f,-1.0f,0.0f,0.0f,0.0f,
0.0f,0.0f,-1.0f,0.0f,0.0f,
0.0f,0.0f,0.0f,1.0f,0.0f,
1.0f,1.0f,1.0f,0.0f,1.0f,
};
Gdiplus::Graphics gfx(&image1);
Gdiplus::ImageAttributes imageAttr;
imageAttr.SetColorMatrix(&matrix);
gfx.DrawImage(images,Gdiplus::Rect(0,0,input.width,input.height),0,0,1024,1024,Gdiplus::UnitPixel,&imageAttr);
I am not getting what I expect. Can someone help me find the mistake I am making?
You can use the alpha component of a color to specify transparency, so that colors can be combined. However, if you want the combination to be something other than the alpha blend, you must draw the pixels yourself. You could first draw into a transparent bitmap, then render that onto the destination pixel by pixel.
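For instance, here is a minimal sketch of that idea with System.Drawing (GDI+ from Windows Forms); the blend rule used here, inverting the destination wherever the overlay has any coverage, is only one possible choice, and the method name is mine:

using System.Drawing;

static class CustomBlend
{
    // 'overlay' is a 32bpp ARGB bitmap the pen strokes were drawn into;
    // 'dest' is the image being drawn onto. Both must be the same size.
    public static void InvertWhereCovered(Bitmap dest, Bitmap overlay)
    {
        for (int y = 0; y < dest.Height; y++)
        {
            for (int x = 0; x < dest.Width; x++)
            {
                if (overlay.GetPixel(x, y).A == 0)
                    continue; // fully transparent: leave the destination alone

                Color dst = dest.GetPixel(x, y);
                // Custom rule: invert the destination so the stroke is always visible.
                dest.SetPixel(x, y, Color.FromArgb(255 - dst.R, 255 - dst.G, 255 - dst.B));
            }
        }
    }
}

GetPixel/SetPixel is slow; LockBits is the usual optimization, but it would make the sketch longer.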
When rendering textures that have an alpha channel, a white border appears around the non-transparent part (the border seems to consist of the pixels whose alpha is > 0 and < 1):
The original texture was created in Illustrator and exported as a PNG. Here it is:
(Well, it seems Stack Overflow altered the image, adjusting pixels that are not completely opaque/transparent, so here is a link.)
It is probably the blending, though I don't know what is wrong with the setup:
gl.enable(gl.BLEND);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
[Update]
Here is a rendered version, where I added an alpha gradient to the left part of the texture (so opacity ramps from 0 up to 1 at the halfway point).
This texture is the only texture rendered at this position. It seems to be whitest around a = 0.5, which is really weird. The background is just a cleared color:
gl.clearColor(0.603, 0.76, 0.804, 1.0);
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
// render objects here
The depth function setup looks like:
gl.enable(gl.DEPTH_TEST);
gl.depthFunc(gl.LEQUAL);
Any ideas? Thanks a lot.
[Update 2]
Answering my own question: the effect occurs when the background color of the canvas or of the body of the HTML page is white. I don't have an explanation, though.
Use premultiplied alpha and this problem will go away.
See: http://home.comcast.net/~tom_forsyth/blog.wiki.html#%5B%5BPremultiplied%20alpha%5D%5D
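In short, premultiplying means storing each color channel already scaled by its alpha, and then blending with (ONE, ONE_MINUS_SRC_ALPHA) instead of (SRC_ALPHA, ONE_MINUS_SRC_ALPHA). A rough sketch of the premultiplication step itself, written in C# purely to show the arithmetic (the question's actual code is WebGL/JavaScript):

// Premultiply an RGBA8 pixel buffer in place: each color channel is
// scaled by its alpha before the texture is uploaded and blended.
static void PremultiplyAlpha(byte[] rgba)
{
    for (int i = 0; i < rgba.Length; i += 4)
    {
        int a = rgba[i + 3];
        rgba[i]     = (byte)(rgba[i]     * a / 255); // R
        rgba[i + 1] = (byte)(rgba[i + 1] * a / 255); // G
        rgba[i + 2] = (byte)(rgba[i + 2] * a / 255); // B
    }
}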
This is a problem related to the linear interpolation used when sampling the texture. At the borders, some interpolated pixels end up half white, half green, with 0.5 alpha. You should modify your texture to extend the borders with one more row of green pixels, even if they are totally transparent.
What's your draw order? This looks like a depth buffering issue to me — you start with a white background, draw the thing with the border so that it's composited on the white, then draw the thing behind the thing with the border. Those areas where the border was blended with the original white background will have stored a value in the depth buffer equal to the depth of their plane, so when the object behind is subsequently drawn, its pixels are discarded in that area.
The general rule is to draw transparent objects after opaque objects, usually from back to front. If you're using additive blending then it's often good enough to disable the depth buffer after the opaque draw and draw them in any order.
When setting the FragColor in the shader, try multiplying the image RGB with the image alpha.
I have a randomly colored background that is split into solid-colored rectangles. I want to draw a grid over the rectangles (that part is not the problem). The issue is that, because the colors are random, I cannot hard-code the grid color; it might not show up.
Another way to think about this is plotting a grid on a plot of a surface f(x,y). If the grid color happens to be the same color of the function (however it is defined) then it won't be visible.
I would like to take the background color and compute a new color (either grayscale or similar to the background color) that contrasts with it so it can easily be seen (but is not distracting, such as pure white on pure black).
I've tried using the luminance and weighted luminance but it doesn't work well for all colors. I've also tried gamma correcting the colors but it also does not work well.
I would also like the grid color to be as uniform as possible (I could possibly compute the adjacent grid colors to blend in). It is not that important but would be nice to have some uniformity.
The code I'm working with is based around
//byte I = (byte)(0.2*R + 0.7*G + 0.1*B);
//byte I = (byte)((R + G + B)/3.0);
byte I = (byte)(Math.Max(Bar.Background.R, Math.Max(Bar.Background.G, Bar.Background.B)));
if (I < 120)
I = (byte)(I + 30);
else
I = (byte)(I - 30);
//I = (byte)(Math.Pow(I/255.0, 1/2.0)*255);
I've also tried gamma-correcting the RGB values first.
Anyone have any ideas?
The colors that offer the most contrast are the fully saturated ones. This gives you a way to find a color that may work (but not necessarily, for many reasons): essentially, you pick the color that is furthest away along the line connecting the given color and its fully saturated counterpart.
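One way to read that suggestion (my interpretation, not necessarily what the answer intended) is to keep a small set of fully saturated candidates and pick whichever is farthest from the background in RGB space; whether the result is pleasant rather than distracting is another matter, as noted above:

using System.Drawing;

static class ContrastPick
{
    // Fully saturated candidates: the colored corners of the RGB cube.
    static readonly Color[] Saturated =
    {
        Color.FromArgb(255, 0, 0), Color.FromArgb(0, 255, 0), Color.FromArgb(0, 0, 255),
        Color.FromArgb(255, 255, 0), Color.FromArgb(0, 255, 255), Color.FromArgb(255, 0, 255),
    };

    // Pick the candidate with the largest squared RGB distance from the background.
    public static Color MostContrasting(Color background)
    {
        Color best = Saturated[0];
        int bestDist = -1;
        foreach (var c in Saturated)
        {
            int dr = c.R - background.R, dg = c.G - background.G, db = c.B - background.B;
            int dist = dr * dr + dg * dg + db * db;
            if (dist > bestDist) { bestDist = dist; best = c; }
        }
        return best;
    }
}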
I'm not sure how to ask this but here goes.
I draw a filled, coloured rectangle on screen. The colour is in the form R, G, B.
I then want to draw text on top of the rectangle; however, the colour of the text has to be chosen so that it provides the best contrast, meaning the text is readable.
Example:
If I draw a black rectangle, the obvious colour for text would be white.
What I'm trying right now is this: I pass this function the colour of the rectangle, and it returns an inverted colour that I then use for my text.
It works, but it's not the best way.
Any suggestions?
// eg. usage: Color textColor = GetInverseLuminance(rectColor);
private Color GetInverseLuminance(Color color)
{
int greyscale = (int)(255 - ((color.R * 0.30f) + (color.G * 0.59f) + (color.B * 0.11f)));
return Color.FromArgb(greyscale, greyscale, greyscale);
}
One simple approach that is guaranteed to give a significantly different color is to toggle the top bit of each component of the RGB triple.
// Flip the top bit of each channel; the result differs by 128 in every channel.
Color inverse(Color c)
{
    return Color.FromArgb(c.R ^ 0x80, c.G ^ 0x80, c.B ^ 0x80);
}
If the original color was #1AF283, the "inverse" will be #9A7203.
The contrast will be significant. I make no guarantees about the aesthetics.
Update, 2009/4/3: I experimented with this and other schemes. Results at my blog.
The most readable color is going to be either white or black. The most 'soothing' color will be something that is neither white nor black; it will be a color that lightly contrasts with your background color. There is no way to do this programmatically because it is subjective: you will not find the most readable color for everyone, because everyone sees things differently.
Some tips about color, particularly concerning foreground and background juxtaposition, such as with text.
The human eye is essentially a simple lens, and therefore can only effectively focus on one color at a time. The lenses used in most modern cameras work around this problem by combining multiple elements with different refractive indexes (achromatic lenses) so that all colors are in focus at once, but the human eye is not that advanced.
For that reason, your users should only have to focus on one color at a time to read the text. This means that either the foreground is in color, or the background, but never both. Otherwise you get a condition typically called vibration, in which the eye rapidly shifts focus between the foreground and background colors trying to resolve the shape; it never succeeds, the shape is never quite in focus, and the result is eyestrain.
Your function won't work if you supply it with RGB(127,127,127), because it will return almost exactly the same colour. (Modifying your function to return either black or white would slightly improve things.)
The best way to always have things readable is to have white text with black around it, or the other way around.
It's often achieved by first drawing the black text at (x-1,y-1), (x+1,y-1), (x-1,y+1) and (x+1,y+1), and then the white text at (x,y).
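A quick sketch of that halo trick with System.Drawing (the offsets and colors follow the description above; the method name is mine):

using System.Drawing;

static class OutlinedText
{
    // Draw white text with a black halo: the black copy goes at the four
    // diagonal offsets first, then the white copy on top at (x, y).
    public static void Draw(Graphics g, string text, Font font, int x, int y)
    {
        foreach (var (dx, dy) in new[] { (-1, -1), (1, -1), (-1, 1), (1, 1) })
        {
            g.DrawString(text, font, Brushes.Black, x + dx, y + dy);
        }
        g.DrawString(text, font, Brushes.White, x, y);
    }
}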
Alternatively, you could first draw a semi-transparent black block, and then non-transparent white text over it. That ensures that there will always be a certain amount of contrast between your background and your text.
Why grey? Either black or white would be best: white on dark colors, black on light colors. Just check whether the luminance is above a threshold and pick one or the other.
(you don't need the .net, c# or asp.net tags, by the way)
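For instance, a black-or-white picker based on a simple luminance threshold might look like this (the 0.30/0.59/0.11 weights are the same ones already quoted in the question's code; the 50% cutoff is an arbitrary choice):

using System.Drawing;

static class TextContrast
{
    // Pick black or white text depending on the background's luminance.
    public static Color ForBackground(Color background)
    {
        double luminance =
            0.30 * background.R + 0.59 * background.G + 0.11 * background.B;
        return luminance > 127.5 ? Color.Black : Color.White;
    }
}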
You need to study some color theory. A program called "Color Wheel Pro" is fun to play around with and will give you the general idea.
Essentially, you're looking for complementary colors for a given color.
That said, I think you will find that while color theory helps, you still need a human eye to fine tune.