How to antialias when drawing to semitransparent frame buffers in OpenGL? [duplicate]

This question already has answers here:
OpenGL default pipeline alpha blending does not make any sense for the alpha component
I'm currently learning OpenGL and trying to make a simple GUI. So far I know very little about shaders and haven't used any.
One of the tricks I use to accelerate text rendering is to render the text quads to a transparent framebuffer object before drawing that to the screen. The speedup is significant, but I noticed the text is poorly drawn at the edges. I then noticed that if I cleared the framebuffer to a different transparent color, the text would blend with that color. In the example below I rendered to a transparent green texture:
I use the following parameters for blending:
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE)
with glBlendEquation left at its default (GL_FUNC_ADD).
My understanding from the documentation is that each pixel is sent through the equation source_rgb * src_factor + dest_rgb * dst_factor.
What I would typically want is that, when a texture is transparent, its RGB is ignored on both sides of the blend, so if I could I would compute the RGB with an equation like:
source_rgb * source_alpha / total_alpha + dest_rgb * dest_alpha / total_alpha
where total_alpha is the sum of the alphas. That doesn't seem to be supported.
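To make the artifact concrete, here is a toy Python calculation (the values and variable names are mine, not from the project) of what the default blend does when a 50%-alpha white text pixel lands on the transparent green clear color, compared with the coverage-weighted average described above:

# Default blending of a half-covered white text pixel over a fully
# transparent green clear color.
src_rgb, src_a = (1.0, 1.0, 1.0), 0.5   # white text, 50% coverage
dst_rgb, dst_a = (0.0, 1.0, 0.0), 0.0   # transparent green clear color

# GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA:
got = tuple(s * src_a + d * (1 - src_a) for s, d in zip(src_rgb, dst_rgb))
print(got)   # (0.5, 1.0, 0.5): the "invisible" green leaks into the RGB

# The coverage-weighted average the question asks for:
total_a = src_a + dst_a
want = tuple((s * src_a + d * dst_a) / total_a for s, d in zip(src_rgb, dst_rgb))
print(want)  # (1.0, 1.0, 1.0): the fully transparent destination is ignored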
Is there something that can help me with minimal headache? I'm open to suggestions, from small fixes to rewriting everything, to using a library that already does this.
The full source code is available here if you are interested. Please let me know if you need relevant extracts.
EDIT: I had already tried dropping the GL_ONE, GL_ONE part and simply using (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) for all of RGBA, but the results weren't great either; I get similar artifacts.
Solved the problem using premultiplication as suggested.

First of all, total_alpha isn't the sum of the alphas but rather the following:
total_alpha = 1 - (1 - source_alpha)*(1 - dest_alpha)
As you correctly noted, OpenGL doesn't support that final division by total_alpha. But it doesn't need to: all you have to do is switch to thinking and working in terms of premultiplied alpha, i.e. storing source_rgb * source_alpha in the color channels. With that, the simple
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA)
does the right thing.
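As a minimal sketch of what that looks like in a two-pass text pipeline with PyOpenGL (the FBO handle and draw helpers are hypothetical placeholders for the asker's existing code):

from OpenGL.GL import *

glEnable(GL_BLEND)
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA)  # correct for both passes

# Pass 1: render the text into the offscreen FBO, cleared fully transparent.
glBindFramebuffer(GL_FRAMEBUFFER, fbo)
glClearColor(0.0, 0.0, 0.0, 0.0)  # premultiplied "transparent black"
glClear(GL_COLOR_BUFFER_BIT)
draw_text_quads()  # text color must be premultiplied: (r*a, g*a, b*a, a)

# Pass 2: composite the FBO texture onto the default framebuffer.
glBindFramebuffer(GL_FRAMEBUFFER, 0)
glBindTexture(GL_TEXTURE_2D, fbo_texture)
draw_fullscreen_quad()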

Related

Xamarin iOS OpenTK - BlendFunc with transparent textures

I'm trying to render some label textures with a transparent background using OpenTK in Xamarin. At first the labels seemed to display properly (see picture 1), but when the view rotated, some label backgrounds were not transparent any more (see picture 2).
The enabled BlendFunc is GL.BlendFunc(BlendingFactorSrc.SrcAlpha, BlendingFactorDest.OneMinusSrcAlpha).
My question is: how can I keep the labels' transparency regardless of their positions?
The same code and shader can run properly on Android Devices by the way.
Ah yes, the good old transparency problem. Unfortunately this is one that a graphics programmer has to solve on his own.
For just a few labels, the most straightforward solution is likely to sort your labels by z-depth and render them from farthest to closest. You'd probably need to do some matrix math on the label positions to account for the viewport rotation.
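A minimal sketch of that sorting step in Python/numpy terms (label.position and view_matrix are hypothetical names; the same idea ports directly to C#):

import numpy as np

def sorted_back_to_front(labels, view_matrix):
    def view_z(label):
        p = np.append(label.position, 1.0)  # homogeneous world position
        return (view_matrix @ p)[2]         # z in view space
    # The camera looks down -z, so more negative z is farther away:
    # an ascending sort yields farthest-first draw order.
    return sorted(labels, key=view_z)

# for label in sorted_back_to_front(labels, view_matrix): label.draw()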
For the 3D game I'm working on, I chose the order-independent transparency method called WBOIT by Morgan McGuire, which is fairly simple to implement and yields relatively good results.

Implement Displacement Mapping: Gaps along seams

I am implementing displacement mapping in DirectX 11 using its new tessellation stages.
The diffuse map and displacement map are generated by xNormal.
The result after applying displacement mapping is badly cracked.
http://imgur.com/a/OT2tt#0
Then I realized the values in the texture along the seams are not the same/continuous, so I just used the diffuse texture as the displacement map; the diffuse color is all red.
http://imgur.com/a/OT2tt#1
The result is better, but there is still a 1-pixel gap along the seams.
http://imgur.com/a/OT2tt#2
http://imgur.com/a/OT2tt#3
http://imgur.com/a/OT2tt#4
I was confused by the little gap, so I enlarged the colored part of the texture using MS Paint, and the gap disappeared!
http://imgur.com/a/OT2tt#6
http://imgur.com/a/OT2tt#7
Now I just don't understand where the problem is.
Even when the values along the seams in different parts of the texture are the same (red in this case), there are still gaps in the resulting model.
I tried every sampler filter listed here (MSDN) but nothing helps.
What causes the gaps? It would be better if the problem could be solved by just modifying the texture rather than changing my code.
You must implement watertight seam filtering :D
If you don't, those gaps appear because the sampled values and normals differ across UV seams, so duplicated seam vertices get displaced to different positions.
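A simplified sketch of one common CPU-side fix (not necessarily the answerer's exact method): average the sampled displacement across vertices that share a position but have different UVs, so both sides of a seam move by the same amount. The vertex layout here is hypothetical.

from collections import defaultdict

def weld_displacements(vertices, quantize=1e-5):
    # vertices: list of (position, normal, displacement) tuples, where seam
    # vertices are duplicated with identical positions but different UVs.
    groups = defaultdict(list)
    for i, (pos, normal, disp) in enumerate(vertices):
        key = tuple(round(c / quantize) for c in pos)  # merge near-equal positions
        groups[key].append(i)
    out = list(vertices)
    for idxs in groups.values():
        avg = sum(vertices[i][2] for i in idxs) / len(idxs)
        for i in idxs:
            pos, normal, _ = vertices[i]
            out[i] = (pos, normal, avg)
    return out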

WebGL / Box2D drawing issue: gap between bodies

I'm still a graphics programming novice, and I bet the following problem is just a matter of wrong configuration.
I am creating a game using WebGL for graphics and Box2DWeb for physics. Unfortunately the drawing shows gaps between the physical bodies (left is my actual rendering, right is a rendering using Box2DWeb's debug drawing in another canvas):
Both Box2D and WebGL use the same coordinate system and sizes for the boxes; there is no conversion. The red boxes are actually textured, though this doesn't make a difference. The red boxes are dynamic bodies, the green boxes are static bodies.
Obviously I can't just resize the graphics or the physics. If I made the graphics bigger, the green boxes would overlap; if I made the physics smaller, there would be physics gaps.
Here is another example:
Also, sometimes there is no gap at all, as in the following (I just moved the physics bodies a little to the right):
The black boxes are just color-drawn (no textures). Looking at the previous image, I guess it has to do with converting the floating-point world coordinates to screen pixel coordinates, but I have no idea what the option for fixing this would be.
Thanks a lot for the help
[Update]
It is an orthographic projection matrix that I am initializing in the following way:
mat4.ortho(-this.vpWidth * this.zoom, this.vpWidth * this.zoom, -this.vpHeight * this.zoom, this.vpHeight * this.zoom, 0.1, 100.0, this.pMatrix);
vpWidth and vpHeight are the canvas dimensions (640 * 480). The projection matrix is passed to the vertex shader and multiplied with the model-view matrix and the vertex position. I played around with the zoom factor: the more I zoom in, the bigger the gaps are.
[Update 2]
Okay, I investigated this a little more. bad zeppelin had a good hint: Box2D keeps gaps between bodies to avoid tunneling. That is not the complete explanation, though. I looked at the debug-draw code; it is not resizing anything. I made a little test, zooming in both in WebGL and in the debug draw, with the following result:
At 10x zoom both have the same gap, but at "normal" zoom WebGL draws bigger gaps than Canvas 2D. What could be the explanation? My guess is anti-aliasing, which is enabled for Canvas 2D but not for WebGL (I am using Firefox; I guess I'll run a Chrome test later today to see what happens).
If you check the Box2D manual, it says in chapter 4.2 that the Box2D engine keeps polygons slightly separated to avoid tunneling. Checking the Box2D debug drawing code to see how it translates from Box2D coordinates to draw coordinates might be a good idea, so you can do the same in your app.
With the matrix you provided, you'll be creating a viewport that has a "virtual size" of twice your canvas dimensions. If you are trying for a pixel-for-pixel match, try this (with a zoom of 1.0):
mat4.ortho(-(this.vpWidth/2) * this.zoom, (this.vpWidth/2) * this.zoom, -(this.vpHeight/2) * this.zoom, (this.vpHeight/2) * this.zoom, 0.1, 100.0, this.pMatrix);
That way your 640*480 canvas will have extents of [-320, -240] to [320, 240], which gives you 640*480 units total. Note that this will probably not eliminate the gaps entirely, since, as bad zeppelin noted, Box2D puts them there intentionally, but it should make them less visible.
Another option to reduce the visible gaps is to draw your geometry scaled up just a bit from the physical representation, so that it displays with an extra pixel or two around the edges. The worst that may happen is that the geometry might appear to overlap just a bit, but it's up to you to determine if that's a more objectionable artifact than the gaps.
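A sketch of that padding idea in Python terms (the helper and its parameters are mine; in the asker's WebGL code the resulting scale would be folded into the model-view matrix):

import numpy as np

def padded_scale(half_w, half_h, px_per_unit, pad_px=1.0):
    # half_w, half_h: the body's half-extents in world units.
    # px_per_unit: how many screen pixels one world unit covers.
    pad = pad_px / px_per_unit              # one pixel, in world units
    sx = (half_w + pad) / half_w
    sy = (half_h + pad) / half_h
    return np.diag([sx, sy, 1.0, 1.0])      # 4x4 scale matrix

# model = padded_scale(0.5, 0.5, 32.0) @ model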

Turning off antialiasing in Löve2D

I'm using Löve2D for writing a small game. Löve2D is an open source game engine for Lua. The problem I'm encountering is that an antialiasing filter is automatically applied to sprites when you draw them at non-integer positions.
love.graphics.draw( sprite, x, y )
So when x or y is not a whole number (for example, x = 100.24), the sprite appears blurred. The same happens when the sprite size is not even, because (x, y) points to the center of the sprite; for example, a sprite that is 31x30 pixels will appear blurred again, because its pixels are painted at non-integer positions.
Since I am using pixel art, I want to avoid this entirely, otherwise the art is destroyed by this effect. The workaround I've been using so far is to force the coordinates to be whole numbers by littering the code with calls to math.floor(), and to force all the sprites to have even sizes by adding a row or column of transparent pixels in the paint program when needed.
Is there some command to deactivate the antialiasing I can call at program startup?
If you turn off anti-aliasing you will just get aliasing, hence the name! Why are you drawing at non-integral positions, and what do you want done with the fractional parts? (Round them to the nearest value? Truncate them? What if they're negative?)
Personally I would leave the low-level graphics alone and alter your code to use accessors for x and y that perform the rounding or truncation you require. This guarantees your pixel art ends up drawn on integer boundaries while keeping the anti-aliasing available for when you might need it later.
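The accessor idea, sketched in Python for brevity (the Sprite class and field names are mine; the same pattern is easy to express with Lua getters):

import math

class Sprite:
    def __init__(self, x=0.0, y=0.0):
        self.x, self.y = float(x), float(y)  # exact positions for movement

    @property
    def draw_x(self):
        return math.floor(self.x + 0.5)      # snap to nearest pixel

    @property
    def draw_y(self):
        return math.floor(self.y + 0.5)

# Movement code can keep fractional x/y; drawing code only ever reads
# draw_x/draw_y, so blits always land on integer pixel boundaries.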
Another possible workaround is to use math.floor() to round your coordinates, as a cheap fix.
In case anyone is interested, I've been asking in other places and found out that what I am asking for has already been requested as a feature: http://love2d.org/forum/tracker.php?p=2&t=7
So the current version of Löve that I'm using (0.5.0) still doesn't allow disabling the antialias filter, but the feature is already in the SVN version of the engine.
You can turn off anti-aliasing by adding love.graphics.setDefaultFilter("nearest", "nearest", 1) to love.load().

RGB to monochrome conversion

How do I convert the RGB values of a pixel to a single monochrome value?
I found one possible solution in the Color FAQ. The luminance component Y (from the CIE XYZ system) captures, in a single channel, what humans perceive as the brightness of a color. So, use these coefficients:
mono = (0.2125 * color.r) + (0.7154 * color.g) + (0.0721 * color.b);
This MSDN article uses (0.299 * color.R + 0.587 * color.G + 0.114 * color.B);
This Wikipedia article uses (0.3 * color.R + 0.59 * color.G + 0.11 * color.B);
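A minimal numpy sketch applying the first set of coefficients above to a whole image (img is a hypothetical (H, W, 3) uint8 array; see the later answer on linearization for why applying these weights to raw sRGB values is only an approximation):

import numpy as np

def to_mono(img):
    weights = np.array([0.2125, 0.7154, 0.0721])
    return (img.astype(np.float64) @ weights).round().astype(np.uint8)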
This depends on what your motivations are. If you just want to turn an arbitrary image to grayscale and have it look pretty good, the conversions in other answers to this question will do.
If you are converting color photographs to black and white, the process can be both very complicated and subjective, requiring specific tweaking for each image. For an idea what might be involved, take a look at this tutorial from Adobe for Photoshop.
Replicating this in code would be fairly involved, and would still require user intervention to get the resulting image aesthetically "perfect" (whatever that means!).
As also mentioned, a grayscale translation from an RGB triplet is subject to taste (note that monochrome images need not be grayscale).
For example, you could cheat and extract only the blue component, simply throwing the red and green components away and copying the blue value in their stead. Another simple and generally OK solution would be to take the average of the pixel's RGB triplet and use that value in all three components.
The fact that there is a considerable market for professional, not-very-cheap-at-all-no-sirree grayscale/monochrome converter plugins for Photoshop alone tells you that the conversion is just as simple or as complex as you wish.
Converting an RGB picture to monochrome is not always a trivial linear transformation. In my opinion such a problem is better addressed by color segmentation techniques, which you can achieve with k-means clustering.
See this reference example from the MathWorks site:
https://www.mathworks.com/examples/image/mw/images-ex71219044-color-based-segmentation-using-k-means-clustering
Original picture in colours.
After converting to monochrome using k-means clustering
How does this work?
Collect all the pixel values from the entire image. From an image W pixels wide and H pixels high, you will get W * H color values. Now, using the k-means algorithm, create 2 clusters (or bins) and throw the colours into the appropriate bins. The 2 clusters represent your black and white shades.
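A minimal sketch of that two-cluster binarization with scikit-learn (img is a hypothetical (H, W, 3) uint8 RGB array):

import numpy as np
from sklearn.cluster import KMeans

def kmeans_monochrome(img):
    pixels = img.reshape(-1, 3).astype(np.float64)   # W * H colour samples
    km = KMeans(n_clusters=2, n_init=10).fit(pixels)
    # The cluster with the brighter centroid becomes white, the other black.
    bright = np.argmax(km.cluster_centers_.sum(axis=1))
    mono = np.where(km.labels_ == bright, 255, 0).astype(np.uint8)
    return mono.reshape(img.shape[:2])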
YouTube video demonstrating image segmentation using k-means:
https://www.youtube.com/watch?v=yR7k19YBqiw
Challenges with this method
The k-means clustering algorithm is susceptible to outliers. A few random pixels whose colour is far (in RGB distance) from the rest of the crowd can easily skew the centroids and produce unexpected results.
Just to point out: for the self-selected answer, you have to LINEARIZE the sRGB values before you can apply the coefficients. This means removing the transfer curve.
To remove the transfer curve, divide the 8-bit R, G, and B channels by 255.0, then either use the sRGB piecewise transform, which is recommended for image processing, OR cheat and raise each channel to the power of 2.2.
Only after linearizing can you apply the coefficients shown (which, incidentally, are not exactly correct in the selected answer).
The standard coefficients are 0.2126, 0.7152, and 0.0722. Multiply each channel by its coefficient and sum them to get Y, the luminance. Then re-apply the gamma to Y, multiply by 255, and copy the result to all three channels: boom, you have a greyscale (monochrome) image.
Here it is all at once in one simple line:
// Andy's Easy Greyscale in one line.
// Send it sR sG sB channels as 8 bit ints, and
// it returns three channels sRgrey sGgrey sBgrey
// as 8 bit ints that display glorious grey.
sRgrey = sGgrey = sBgrey = Math.min(Math.pow(Math.pow(sR/255.0, 2.2)*0.2126 + Math.pow(sG/255.0, 2.2)*0.7152 + Math.pow(sB/255.0, 2.2)*0.0722, 0.454545)*255, 255);
And that's it. Unless you have to parse hex strings....
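For completeness, a sketch of the piecewise variant in Python (the function names are mine); it does the same as the one-liner above but uses the exact sRGB transfer curves instead of the 2.2 / 0.454545 approximation:

def srgb_to_linear(c):
    # c is a channel value in [0, 1].
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def grey_from_srgb8(r, g, b):
    y = (0.2126 * srgb_to_linear(r / 255.0)
         + 0.7152 * srgb_to_linear(g / 255.0)
         + 0.0722 * srgb_to_linear(b / 255.0))
    v = round(linear_to_srgb(y) * 255)
    return v, v, v   # same grey value in all three channels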
