I'm trying to get a simple effect to display using WebGL. Naturally, this means I'm using the fragment shader language defined in the GLSL ES 1.0 specification.
The code I am working with is largely copied from other sources. It sets up a square and uses the fragment and vertex shaders to determine the pixel colors. The following code will just display a white square.
gl_FragColor = vec4(0.0, 0.0, 0.0, 0.0); // displays a white square
However, if I change the alpha component to 1.0 then it will show a black square instead.
gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0); // displays a black square
I'm assuming that the color that is output by the fragment shader must be combined with some previous color. How do I make sure that only the last color (regardless of its alpha value) is what is actually chosen as the color to be displayed?
Or perhaps I've got the order the wrong way around. Perhaps there is a later phase that is combining with the color from the fragment shader to produce the white color. In any case, I know that some kind of blending is going on because when I change the alpha value to 0.5 I get a grey square. I just want to know, where is the white color coming from? And how do I get rid of it?
As far as I can tell the problem is not to do with the blending function. The code is on GitHub here. Try it out in Google Chrome or Firefox.
The white color is not coming from any WebGL blending operations. It happens because canvas DOM elements are composited with the web page they reside in. Setting the background color of the canvas DOM element to black fixes the problem. This phenomenon has been written about here. However, the most definitive answer comes from the WebGL specification. The behaviour of compositing can be defined by setting various attributes of the WebGLContextAttributes object. Quoting from the specification:
The optional WebGLContextAttributes object may be used to change whether or not the buffers are defined. It can also be used to define whether the color buffer will include an alpha channel. If defined, the alpha channel is used by the HTML compositor to combine the color buffer with the rest of the page.
Thus, an alternative solution that does not involve setting the background of the canvas to black is to request the WebGL context with the following JavaScript code:
gl = canvas.getContext("experimental-webgl", { alpha: false } );
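For reference, a minimal sketch of both fixes together; the variable names and the fallback to the standard "webgl" context name are assumptions, not part of the original code:
// Assumed setup: `canvas` is the <canvas> element used for rendering.
var canvas = document.getElementById("canvas");
// Fix 1: give the canvas a black background so a transparent color buffer
// no longer composites against the white page.
canvas.style.backgroundColor = "black";
// Fix 2: request a color buffer without an alpha channel, so the HTML
// compositor treats the canvas as opaque.
var gl = canvas.getContext("webgl", { alpha: false }) ||
         canvas.getContext("experimental-webgl", { alpha: false });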
It would be helpful if you posted code, but it seems very much like you are blending your fragment color with a white background.
A possible culprit would be some WebGL initialization code that looks like:
gl.clearColor(1.0, 1.0, 1.0, 1.0);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
gl.enable(gl.BLEND);
which initializes the color buffer clear color to white and enables alpha blending.
So to get rid of your problem, just don't enable blending. In other words, lose the gl.enable(gl.BLEND); statement.
Update:
Now that you've posted your code, it sounds like your problem is the opposite of what I was speculating. Assuming that you want the black background to show through according to the alpha that you set, you should replace your gl.disable(gl.BLEND) with
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
gl.enable(gl.BLEND);
I know NOTHING about this topic (glsl, webgl, etc.), but you said alpha which makes me think that's an RGBA value. So a 0.0, 0.0, 0.0, 0.0 is black, with 0.0 alpha coverage. In other words, totally transparent.
0.0, 0.0, 0.0, 1.0 is black with 1.0 alpha coverage. In other words, totally opaque. It sounds like the output is 100% correct.
Let me clarify, just in case the above wasn't clear (it seemed confusing to me).
The last number only affects the transparency of the color; the first three numbers affect the red, green, and blue channels respectively.
In GLSL these components range from 0.0 to 1.0, whereas in web browsers (my area of knowledge) colors go from 0 to 255, where 0 is none of that color and 255 is all of that color.
So in your case you are requesting a square with 0 red, 0 green, and 0 blue (which is just black). However, when you set the transparency (the alpha channel) to zero, it renders as white (or, more likely, transparent, and whatever is behind it is white). When you set the alpha to 0.5 it renders black (as 0,0,0 should) but only half visible. When you set the alpha to 1.0, it renders a fully opaque, black square.
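A minimal sketch of that 0-255 versus 0.0-1.0 mapping, assuming a hypothetical color uniform in the fragment shader (the names here are illustrative, not from the original code):
// Convert browser-style 0-255 channels into the normalized 0.0-1.0
// components that GLSL expects.
function setColorUniform(gl, location, r, g, b, a) {
  gl.uniform4f(location, r / 255, g / 255, b / 255, a);
}

// colorLocation is assumed to come from gl.getUniformLocation.
// Opaque black: a solid black square.
setColorUniform(gl, colorLocation, 0, 0, 0, 1.0);
// The same black but fully transparent: whatever is behind shows through.
setColorUniform(gl, colorLocation, 0, 0, 0, 0.0);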
Make sense?
Related
I draw to a texture with a fragment shader in OpenGL.
I set my color to 100% red and 50% opacity, but when I then read this color back I discover that it is no longer 100% red.
The same can be noticed in GIMP.
I choose a 100% red color but draw it with 50% opacity; when I then use the color picker tool, it tells me the red is only 80%.
Is there a way to preserve the color value in OpenGL ES 2.0?
The color is modified by the blending function and operation. You have to disable blending.
There is no opacity, there is just an alpha channel. The alpha channel and the blend function define how a source color is mixed (blended) with the color already in the target buffer. Hence, if blending is enabled, the final color is a mix of the source color and the color in the target buffer. If blending is disabled, the color and the alpha channel are copied to the target without modification.
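A minimal sketch of what that might look like when rendering into a texture in WebGL/OpenGL ES 2.0; the framebuffer name and the drawScene() call are assumptions:
// Render to the texture with blending disabled, so the fragment color
// (100% red, 50% alpha) is written to the texture unmodified.
gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
gl.disable(gl.BLEND);
drawScene();

// Switch back to the default framebuffer; blending can be re-enabled
// there if the on-screen pass needs it.
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
gl.enable(gl.BLEND);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);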
I am referring to an older question about color blending with GDI+:
Using GDI+ with Windows Forms, I want to be able to draw with a pen and blend color based on the destination pixel color.
For example, if I draw a line and it passes over black pixels, I want it to be a lighter color (like white for example) so that it's visible. When that same line passes over white pixels, it should be a darker color (black for example) so that it's still clearly visible.
The answers say to use a color matrix for the transformation,
so I started implementing it.
My image is available as raw RGB48 data:
// Wrap the raw RGB48 data in a GDI+ bitmap (stride = width * 6 bytes).
Gdiplus::Bitmap image(input.width, input.height, input.width * 6, PixelFormat48bppRGB, (unsigned char*)rgb48);
Gdiplus::Image *images = image.GetThumbnailImage(input.width, input.height);
Gdiplus::TextureBrush brush(images);
Gdiplus::Pen pen(&brush);

// Color matrix that inverts R, G and B: negate each channel, then add 1.
Gdiplus::ColorMatrix matrix = {
    -1.0f,  0.0f,  0.0f, 0.0f, 0.0f,
     0.0f, -1.0f,  0.0f, 0.0f, 0.0f,
     0.0f,  0.0f, -1.0f, 0.0f, 0.0f,
     0.0f,  0.0f,  0.0f, 1.0f, 0.0f,
     1.0f,  1.0f,  1.0f, 0.0f, 1.0f,
};

// image1 is the destination bitmap (defined elsewhere).
Gdiplus::Graphics gfx(&image1);
Gdiplus::ImageAttributes imageAttr;
imageAttr.SetColorMatrix(&matrix);
gfx.DrawImage(images, Gdiplus::Rect(0, 0, input.width, input.height), 0, 0, 1024, 1024, Gdiplus::UnitPixel, &imageAttr);
I am not getting what I expect. Can someone help me find the mistake I am making?
You can use the alpha component of a color to specify transparency, so that colors can be combined. However, if you want the combination to be something other than the alpha blend, you must draw the pixels yourself. You could first draw into a transparent bitmap, then render that onto the destination pixel by pixel.
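A minimal sketch of that pixel-by-pixel idea, written in JavaScript over a raw RGBA byte array purely to illustrate the logic (in GDI+ you would read and write the bitmap memory yourself, e.g. via Bitmap::LockBits); the function name, the luminance weights, and the threshold are assumptions:
// For each destination pixel, pick white or black "ink" depending on how
// dark the destination already is, then blend it in by the overlay alpha,
// so the drawn line stays visible over both light and dark areas.
function contrastBlend(dest, overlayAlpha) {
  for (var i = 0; i < dest.length; i += 4) {
    var luminance = 0.30 * dest[i] + 0.59 * dest[i + 1] + 0.11 * dest[i + 2];
    var ink = luminance < 128 ? 255 : 0; // white over dark, black over light
    dest[i]     = dest[i]     * (1 - overlayAlpha) + ink * overlayAlpha;
    dest[i + 1] = dest[i + 1] * (1 - overlayAlpha) + ink * overlayAlpha;
    dest[i + 2] = dest[i + 2] * (1 - overlayAlpha) + ink * overlayAlpha;
  }
}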
In real life, transparency (or opacity) can be explained in a "simple" way by how much light an object reflects, or how much of it passes through. So if an object is transparent, light passes through it, reflects off whatever is behind it, and gets back to us.
How do computers simulate this behavior? I mean, we as developers have many abstractions and APIs to set alpha levels and opacities of our pixels, but how does the computer translate this into the bitmap that goes to the screen?
What I think is happening: both back and front colors are "combined" to produce a third color, and that is what gets drawn to the screen. E.g.: semi-transparent white over a red background will be painted as pink!
Yes, you have it right. The "back" color is combined with the "front" color in proportion to the opacity of the front color.
For a single color channel, e.g. red, with opacity from 0 to 1:
new = old * (1 - opacity) + front * opacity
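A minimal sketch of that formula applied to each channel, using the white-over-red example from the question (the function name is illustrative):
// old is the background channel value, front is the new color's channel
// value, and opacity is the front color's alpha in [0, 1].
function blendChannel(old, front, opacity) {
  return old * (1 - opacity) + front * opacity;
}

// 50% opaque white (255, 255, 255) over an opaque red background (255, 0, 0):
var r = blendChannel(255, 255, 0.5); // 255
var g = blendChannel(0, 255, 0.5);   // 127.5
var b = blendChannel(0, 255, 0.5);   // 127.5
// Roughly (255, 128, 128) -- pink, as expected.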
When rendering textures that have an alpha-channel, a white border appears around the non-transparent part (the border seems to be the pixels that have an alpha > 0 and < 1):
The original texture is created in Illustrator and exported as a PNG. Here it is:
(Well, it seems Stack Overflow altered the image, adjusting pixels that are not completely opaque/transparent, so here is a link.)
It is probably the blending, though I don't know what is wrong with the setup:
gl.enable(gl.BLEND);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
[Update]
Here is a rendered version, where I added an alpha gradient to the left part of the texture (so it goes from 0 opacity up to 1 at the halfway point).
This texture is the only texture rendered at this position. It seems to be whitest around a = 0.5. Really weird. The background is just a cleared color:
gl.clearColor(0.603, 0.76, 0.804, 1.0);
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
// render objects here
The depth setup looks like:
gl.enable(gl.DEPTH_TEST);
gl.depthFunc(gl.LEQUAL);
Any ideas? Thanks a lot.
[Update 2]
Answering my own question: the effect occurs when the background color of the canvas or the body of the HTML page is white. I don't have an explanation, though.
Use premultiplied alpha and this problem will go away.
See: http://home.comcast.net/~tom_forsyth/blog.wiki.html#%5B%5BPremultiplied%20alpha%5D%5D
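A minimal sketch of what switching to premultiplied alpha might look like in the WebGL setup; the texture-upload call is illustrative, and the important parts are the pixelStorei flag and the changed blend function:
// Ask WebGL to premultiply RGB by alpha when the PNG is uploaded.
gl.pixelStorei(gl.UNPACK_PREMULTIPLY_ALPHA_WEBGL, true);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);

// With premultiplied colors the source blend factor becomes ONE
// instead of SRC_ALPHA.
gl.enable(gl.BLEND);
gl.blendFunc(gl.ONE, gl.ONE_MINUS_SRC_ALPHA);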
This is a problem related to the texture's linear interpolation (filtering). On the borders, some interpolated texels will be half white and half green, with 0.5 alpha. You should modify your texture to extend the borders with one more green pixel, even if it is totally transparent.
What's your draw order? This looks like a depth buffering issue to me — you start with a white background, draw the thing with the border so that it's composited on the white, then draw the thing behind the thing with the border. Those areas where the border was blended with the original white background will have stored a value in the depth buffer equal to the depth of their plane, so when the object behind is subsequently drawn, its pixels are discarded in that area.
The general rule is to draw transparent objects after opaque objects, usually from back to front. If you're using additive blending then it's often good enough to disable the depth buffer after the opaque draw and draw them in any order.
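A minimal sketch of that ordering, assuming hypothetical drawOpaqueObjects() and drawTransparentObjects() helpers:
// 1. Draw all opaque geometry first, with depth testing and depth writes on.
gl.enable(gl.DEPTH_TEST);
gl.depthMask(true);
drawOpaqueObjects();

// 2. Then draw transparent geometry (ideally back to front), still testing
//    against the opaque scene but without writing new depth values.
gl.enable(gl.BLEND);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
gl.depthMask(false);
drawTransparentObjects();

// 3. Restore depth writes for the next frame.
gl.depthMask(true);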
When setting gl_FragColor in the shader, try multiplying the texture's RGB by its alpha (i.e. output premultiplied colors).
I'm not sure how to ask this but here goes.
I draw a filled, coloured rectangle on screen. The colour is in the form of R, G, B.
I then want to draw text on top of the rectangle, however the colour of the text has to be such that it provides the best contrast, meaning it's readable.
Example:
If I draw a black rectangle, the obvious colour for text would be white.
What I tried right now is this. I pass this function the colour of the rectangle and it returns an inverted colour that I then use for my text.
It works, but it's not the best way.
Any suggestions?
// eg. usage: Color textColor = GetInverseLuminance(rectColor);
private Color GetInverseLuminance(Color color)
{
    // Perceived luminance (classic 0.30/0.59/0.11 luma weights), inverted.
    int greyscale = (int)(255 - ((color.R * 0.30f) + (color.G * 0.59f) + (color.B * 0.11f)));
    return Color.FromArgb(greyscale, greyscale, greyscale);
}
One simple approach that is guaranteed to give a significantly different color is to toggle the top bit of each component of the RGB triple.
Color inverse(Color c)
{
    // Toggle the top bit of each channel to get a noticeably different color.
    return Color.FromArgb(c.R ^ 0x80, c.G ^ 0x80, c.B ^ 0x80);
}
If the original color was #1AF283, the "inverse" will be #9A7203.
The contrast will be significant. I make no guarantees about the aesthetics.
Update, 2009/4/3: I experimented with this and other schemes. Results at my blog.
The most readable color is going to be either white or black. The most 'soothing' color will be something that is neither white nor black; it will be a color that lightly contrasts with your background color. There is no way to do this programmatically, because it is subjective. You will not find the most readable color for everyone, because everyone sees things differently.
Some tips about color, particularly concerning foreground and background juxtaposition, such as with text.
The human eye is essentially a simple lens, and therefore can only effectively focus on one color at a time. The lenses used in most modern cameras work around this problem by combining multiple elements of different refractive indexes (achromatic lenses) so that all colors are in focus at once, but the human eye is not that advanced.
For that reason, your users should only have to focus on one color at a time to read the text. This means that either the foreground is in color, or the background, but never both. Using strong color for both leads to a condition typically called vibration, in which the eye rapidly shifts focus between the foreground and background colors, trying to resolve the shape; it never resolves, the shape is never in focus, and the result is eyestrain.
Your function won't work if you supply it with RGB(127,127,127), because it will return almost exactly the same colour, (128,128,128). (Modifying your function to return either black or white would slightly improve things.)
The best way to always have things readable is to have white text with black around it, or the other way around.
It's often achieved by first drawing the black text at (x-1,y-1), (x+1,y-1), (x-1,y+1) and (x+1,y+1), and then the white text at (x,y).
Alternatively, you could first draw a semi-transparent black block, and then non-transparent white text over it. That ensures that there will always be a certain amount of contrast between your background and your text.
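A minimal sketch of the outline approach, shown with a 2D canvas context purely as an illustration (a WinForms equivalent would make the same five Graphics.DrawString calls):
// Draw black copies offset by one pixel in each diagonal direction,
// then the white text on top, so it reads on any background.
function drawOutlinedText(ctx, text, x, y) {
  ctx.fillStyle = "black";
  ctx.fillText(text, x - 1, y - 1);
  ctx.fillText(text, x + 1, y - 1);
  ctx.fillText(text, x - 1, y + 1);
  ctx.fillText(text, x + 1, y + 1);
  ctx.fillStyle = "white";
  ctx.fillText(text, x, y);
}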
Why grey? Either black or white would be best: white on dark colors, black on light colors. Just check whether the luminance is above a threshold and pick one or the other.
(you don't need the .net, c# or asp.net tags, by the way)
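A minimal sketch of that threshold idea, shown in JavaScript for brevity (the same logic maps directly onto the C# function above; the 128 cut-off is an assumption):
// Pick black or white text based on the perceived luminance of the
// background color (channels in 0-255).
function textColorFor(r, g, b) {
  var luminance = 0.30 * r + 0.59 * g + 0.11 * b;
  return luminance > 128 ? "black" : "white";
}

// e.g. textColorFor(0, 0, 0) -> "white"; textColorFor(255, 255, 0) -> "black"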
You need to study some color theory. A program called "Color Wheel Pro" is fun to play around with and will give you the general idea.
Essentially, you're looking for complementary colors for a given color.
That said, I think you will find that while color theory helps, you still need a human eye to fine tune.