I'm having a problem when rendering even simple shapes with partial opacity to QGLFrameBufferObjects in Qt.
I have reduced the problem down to this:
When I render a simple quad to a QGLFrameBufferObject with the color set to (1,0,0,.5) and then blit that to the screen, I get a result that is far too light a red for 50% opacity. If I draw the same quad with the same color (same code, in fact) directly to the screen, I get the correct color value. If I render the quad with opacity == 1.0, the results are the same: I get a full, deep red in both cases. I've confirmed that the color is really wrong in the buffer by dumping the buffer to disk directly with buffer.toImage().save("/tmp/blah.tif").
In both cases, I've cleared the output buffer to (1,1,1,1) before performing the operation.
Why are things I draw that are partially transparent coming out lighter when drawn to an offscreen buffer than if I draw them right to the screen? There must be some state that I have to set on the FBO or something, but I can't figure out what it is.
Alpha does not mean "transparent." It doesn't mean anything at all; it only takes on a meaning when you give it one. It only means "transparent" when you set up a blend mode that uses alpha to control transparency. So if you didn't set up a blend mode that creates the effect of transparency, then alpha is just another color component that will be written exactly as-is to the framebuffer.
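As a minimal sketch of what that looks like in practice, assuming Qt's fixed-function-era QGLFramebufferObject, a current GL context, and a hypothetical helper named drawQuadIntoFbo:

#include <QGLFramebufferObject>
#include <QtOpenGL>

// Sketch only: enable a blend mode so the 50% alpha actually blends the quad
// over the white clear color instead of being written to the FBO as-is.
void drawQuadIntoFbo(QGLFramebufferObject &buffer)
{
    buffer.bind();

    glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    glColor4f(1.0f, 0.0f, 0.0f, 0.5f);   // (1, 0, 0, .5) as in the question
    glBegin(GL_QUADS);
    glVertex2f(-0.5f, -0.5f);
    glVertex2f( 0.5f, -0.5f);
    glVertex2f( 0.5f,  0.5f);
    glVertex2f(-0.5f,  0.5f);
    glEnd();

    buffer.release();
}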
Just as the question title says, I'm a bit confused by those terms, especially viewport and render area. AFAIK, the viewport is used in the VS stage, while the render area is used in the FS stage. If the viewport is smaller than the render area, what will happen?
Thanks.
The viewport specifies how the normalized device coordinates are transformed into the pixel coordinates of the framebuffer.
The scissor is the area where you can render; it is similar to the viewport in that regard, but changing the scissor rectangle doesn't affect the coordinate transform.
RenderArea is the area of the framebuffer that will be changed by the render pass. This lets the implementation know that not the entire framebuffer will be changed and gives it the opportunity to optimize, for example by not including some tiles on a tile-based architecture. It is the application's responsibility to ensure that no rendering happens outside that area, for example by making sure the scissor rects are always fully contained within the renderArea.
Framebuffer size and attachment size are related in that the attachments must be at least as large as the framebuffer.
if the viewport is smaller than the render area, what will happen?
Nothing special; the render commands will render within the viewport. The other way around (a render area smaller than the viewport) will result in undefined values in the framebuffer.
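A rough sketch of how these fit together in Vulkan. The command buffer, render pass and framebuffer handles, the 1280x720 and 640x360 sizes, and the use of dynamic viewport/scissor state are all assumptions made purely for illustration:

#include <vulkan/vulkan.h>

void beginPassAndSetState(VkCommandBuffer commandBuffer,
                          VkRenderPass renderPass,
                          VkFramebuffer framebuffer)
{
    // Render area: the part of the framebuffer this render pass may touch.
    VkRect2D renderArea{};
    renderArea.offset = {0, 0};
    renderArea.extent = {1280, 720};

    VkRenderPassBeginInfo beginInfo{};
    beginInfo.sType       = VK_STRUCTURE_TYPE_RENDER_PASS_BEGIN_INFO;
    beginInfo.renderPass  = renderPass;
    beginInfo.framebuffer = framebuffer;
    beginInfo.renderArea  = renderArea;
    vkCmdBeginRenderPass(commandBuffer, &beginInfo, VK_SUBPASS_CONTENTS_INLINE);

    // Viewport: the NDC-to-pixel transform; it may be smaller than renderArea.
    VkViewport viewport{0.0f, 0.0f, 640.0f, 360.0f, 0.0f, 1.0f};
    vkCmdSetViewport(commandBuffer, 0, 1, &viewport);

    // Scissor: clips rasterization without changing the coordinate transform.
    // Kept fully inside renderArea so nothing is written outside it.
    VkRect2D scissor{{0, 0}, {640, 360}};
    vkCmdSetScissor(commandBuffer, 0, 1, &scissor);
}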
I am programming a tool that has a custom canvas derived from Gtk::DrawingArea. The drawing does not happen directly on the DrawingArea; I have two Cairo::Surfaces. One surface is the background (mostly an image) and the other surface holds what was drawn by the user.
When the user draws, I do not redraw the whole surface; I just set a clip on the Cairo context to only draw what is necessary. The problem is that when I have the clip set on the image background, I can see a white frame around the clipped area. Is there a flag or a workaround so the image looks good instead of having a fuzzy frame around it?
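For reference, a minimal sketch of the kind of clipped redraw described above, assuming cairomm and hypothetical names (redrawRegion, background, drawing). Snapping the clip rectangle to whole pixels is one thing worth trying, since a fractional clip boundary leaves half-covered edge pixels:

#include <cairomm/context.h>
#include <cairomm/surface.h>
#include <cmath>

// Sketch: repaint only the dirty rectangle by clipping the context first.
void redrawRegion(const Cairo::RefPtr<Cairo::Context>& cr,
                  const Cairo::RefPtr<Cairo::Surface>& background,
                  const Cairo::RefPtr<Cairo::Surface>& drawing,
                  double x, double y, double w, double h)
{
    cr->save();

    // Expand the dirty rectangle to whole-pixel boundaries before clipping.
    const double x0 = std::floor(x);
    const double y0 = std::floor(y);
    const double x1 = std::ceil(x + w);
    const double y1 = std::ceil(y + h);
    cr->rectangle(x0, y0, x1 - x0, y1 - y0);
    cr->clip();

    cr->set_source(background, 0.0, 0.0);
    cr->paint();                          // background image first

    cr->set_source(drawing, 0.0, 0.0);
    cr->paint();                          // user drawing on top

    cr->restore();
}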
When using an feDisplacementMap SVG filter, my smooth SVG lines are getting all jagged. I could probably render it large and then shrink it down, but isn't SVG supposed to be able to anti-alias?
Okay, so I figured out the answer to my own question: the filterRes attribute: http://www.w3.org/TR/SVG/filters.html#FilterElementFilterResAttribute
In my testing, on Chrome, increasing the filterRes slows things down pretty dramatically.
SVG filters process inputs at the pixel level, not the vector level. As far as an SVG filter is concerned, it's been handed a big rectangle of RGBA pixels to work with. Results from a displacement map can look pixelated because a filter has no idea where the edges that have been displaced are - it's all just pixels as far as it is concerned. (The old semi-transparent pixels that used to be the anti-aliasing have been displaced as well.) However, sometimes you can add another filter or two to solve any problem that this creates. Creative ways to solve this problem:
Take the post-displacement graphic, blur it with a radius of a few pixels then blend the blur back into the original graphic.
Take the post-displacement graphic, do a luminance to alpha conversion, then use that alpha map with a diffuse lighting effect to add a fake anti-alias lighting effect.
Use a convolvematrix with edge detection values to extract edges from the graphic, blur that result and blend it back into the source graphic.
Depending on your graphic, you might be able to use an erode or dilate filter, but that tends to produce boxy highlights and might not work. And of course, you can always tweak your input in SVG (using stroke effects) to "pre-antialias" your source graphic so the result doesn't look so odd.
Having some issues with smooth alpha gradients in texture files resulting in bad banding.
I have a 2D XNA WP7 game and I've come up with a fairly simple lighting system. I draw the areas that would be lit by the light in a separate RenderTarget2D, apply a sprite to dim the edges as you get further away from the light, then blend that final lighting image with the main image to make certain areas darker and lighter.
Here's what I've got so far:
As you can see, the banding is pretty bad. The alpha transparency is quite smooth in the source image, but whenever I draw the sprite, it gets these huge ugly steps between colors. Just to check, I drew the spotlight mask straight onto the scene with normal alpha blending and I still got the banding.
Is there any way to preserve smooth alpha gradients when drawing sprites?
Is there any way to preserve smooth alpha gradients when drawing sprites?
No, you cannot. WP7 phones currently use a 16-bit color system: each pixel gets 5 red bits, 6 green bits, and 5 blue bits (human eyes are more sensitive to green).
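To see why the steps appear, here is a small stand-alone illustration (plain C++, not WP7 code) of what quantizing an 8-bit gradient down to 5 bits per channel does:

#include <cstdio>

int main()
{
    // An 8-bit gradient squeezed into 5 bits (as in the red/blue channels of an
    // R5G6B5 back buffer) collapses into a handful of levels -- those levels are
    // the visible bands.
    for (int value = 0; value <= 255; value += 15) {
        int five_bit = value >> 3;                        // 0..31
        int restored = (five_bit << 3) | (five_bit >> 2); // expanded back to 0..255
        std::printf("8-bit %3d -> 5-bit %2d -> displayed as %3d\n",
                    value, five_bit, restored);
    }
    return 0;
}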
Found out that with Mango, apps can now specify that they support 32bpp, and it will work on devices that support it!
For XNA, put this line at the top of OnNavigatedTo:
SharedGraphicsDeviceManager.Current.PreferredBackBufferFormat = SurfaceFormat.Color;
For Silverlight add BitsPerPixel="32" to the App element in WMAppManifest.xml.
I'm having a major issue which has been bugging me for a while now.
My problem is that my game uses a deferred rendering engine, which makes it very difficult to do alpha blending.
The only way I can think of to solve this issue is to render the scene (including the depth map, normal map and diffuse map) without any objects that have alphas.
Then, for each polygon whose texture has an alpha component, disable the z-buffer and render it out (including normals, depth and colour), and wherever alpha is '0', output nothing to the depth, normal and colour buffers. Perform the lighting calculations/other deferred effects on these two separate textures, then combine the colour buffers, using the depth map to check which pixel is visible.
This idea would be extremely costly to do (not to mention it has some severe shortcomings), so it should obviously be reserved for as few cases as possible, which makes rendering forest areas out of the question. However, if there is no better solution, I have one question.
When doing alpha blending with DirectX, is there a shader/device state I can set that lets me avoid writing to the depth/normal/colour buffer when I want to? The issue is that the pixel shader has to output to all of its specified render targets, so if it is set to output to the 3 render targets it must do so, which will override the previous colour value for that texel in the texture.
If there is no blend state that allows me to do this, it would mean I would have to copy the normal, texture and depth maps to keep the scene, then render to a new texture, depth and normal map, and then combine the two textures based on the alpha and depth values.
I guess all I really want to know is whether there is a simple, sure-fire and possibly cheap way to render alphas in a deferred renderer.
The usual approach to drawing transparent geometry in a deferred renderer is simply to draw it in a separate pass, using ordinary forward rendering rather than deferred rendering.
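As a rough illustration of what the state for that separate pass might look like, Direct3D 9 is assumed here purely to match the "device state" wording in the question, and "device" is assumed to be a valid IDirect3DDevice9*; the transparent geometry would be sorted back to front and drawn after this:

#include <d3d9.h>

// Sketch: state for a forward transparency pass run after the deferred
// (opaque) passes have been lit and composited.
void beginForwardTransparentPass(IDirect3DDevice9* device)
{
    // Test against the opaque scene's depth so hidden transparent pixels are culled...
    device->SetRenderState(D3DRS_ZENABLE, D3DZB_TRUE);
    // ...but don't write depth, leaving the G-buffer's depth untouched.
    device->SetRenderState(D3DRS_ZWRITEENABLE, FALSE);

    // Standard alpha blending for the transparent geometry.
    device->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
    device->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
    device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);
}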