Clear all Layers with vkCmdBeginRenderPass (Vulkan, Layered rendering) - graphics

I have a framebuffer with one color attachment, which is a cube map with 6 layers. I try to use layered rendering with the geometry shader. Rendering of a simple triangle to all layers works. But I am not sure how to clear all layers with vkCmdBeginRenderPass.
vkCmdBeginRenderPass supports pClearValues and clearValueCount, but I cannot specify the number of layers, so only the first layer is cleared. Setting clearValueCount to 6 and giving 6 clear values also does not help.
I saw that vkCmdClearAttachments seems to allow specifying the layers.
Is vkCmdClearAttachments the only way, or did I miss something? Is there maybe a reason that vkCmdBeginRenderPass only clears the first layer even though drawing reaches all layers?

clearValueCount refers to the number of attachments to clear (those with a loadOp of VK_ATTACHMENT_LOAD_OP_CLEAR), not to the number of layers in the framebuffer.
The number of layers cleared at the start of a render pass (when loadOp is set to clear) is determined by the layerCount of the subresourceRange used to create the attachment's image view (for the cube map here, that means a view created with layerCount = 6).

Related

Apply masking on selective image regions using opencv python

I have an ML model that I am using to create a mask to separate background and object. The problem is that the model is not that accurate and we still get regions of background around the edges.
The background could be any color but it is not uniform as you can see in the image.
This is the model output.
I was wondering if there is a way I could apply masking only around the edges so it doesn't affect other parts of the object which have been extracted properly. Basically I only want to trim down these edges that contain background, so any solutions using Python are appreciated.
I'm really sorry for not being at liberty to share the code, but I'm only looking for ideas that I can implement to solve this problem.
You can use binary erosion or dilation to shrink or "grow" the mask so that it covers the edge:
https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.ndimage.morphology.binary_dilation.html
https://docs.scipy.org/doc/scipy-0.14.0/reference/generated/scipy.ndimage.morphology.binary_erosion.html
as for "apply masking only around the edges" (this is not the same mask I was writing about above), you can flag pixels that are close to the edge by iteration over a mask and finding where there is a 0 that has a neighbouring 1 or vice versa.

How do 3d engines decide where polygons go in a model?

I am trying to build my own 3d engine from a 2d one. So far everything works fine, but it's very inefficient because the wireframe model has lines between every point on the shape. I've been doing some research but haven't been able to find anything about what dictates where polygons go for the most efficient rendering.
Here is what a cube looks like in my program:
Is there some mathematical way to remove all the extra geometry?
Any advice really helps, thanks.
OK, so after longer than I'd like to admit, I figured out that you don't need to order faces by their z coordinate; instead, take the surface normal of each face and only render the face if the normal's component toward the camera is above a value (most of the time 0). (Also, you'll want to use premade triangles from object files instead of assigning them to the faces yourself.)
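A minimal sketch of that test in Python with NumPy (the camera-at-origin-looking-down-negative-z convention and the example triangle are just assumptions for illustration):

import numpy as np

def face_normal(v0, v1, v2):
    # Cross product of two edges; its direction depends on the winding order of the vertices
    return np.cross(v1 - v0, v2 - v0)

def is_front_facing(v0, v1, v2, view_dir=np.array([0.0, 0.0, -1.0])):
    # Cull (skip) the face when its normal points away from the camera
    return np.dot(face_normal(v0, v1, v2), view_dir) < 0

triangle = (np.array([0.0, 0.0, -5.0]),
            np.array([1.0, 0.0, -5.0]),
            np.array([0.0, 1.0, -5.0]))

if is_front_facing(*triangle):
    print("draw this face")   # here the engine would rasterise or wire-frame the triangle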

Overlapping opaque shapes drawn to a graphics object within a translucent container don't show the correct alpha in the overlapping region

I have a PIXI.Graphics inside a PIXI.Container (along with some other stuff, including a mask, and a border). The graphics object is being used to draw various polygons. The alpha property of the Container is set to 0.5. Here is the result:
The bright square is the overlap between two polygons. It seems that even though both polygons were drawn to the same opaque graphics object, it's as though they are separate objects with their own alpha channels.
Is there any way to merge all of the polygons together so that the resulting graphics will have uniform alpha despite some overlapping polygons?
Pixi version is 4.7.3.
You can easily use AlphaFilter to achieve this. See this thread: https://github.com/pixijs/pixi.js/issues/4334
// Render the container to a texture first, then apply alpha to the flattened result
const alphaFilter = new PIXI.filters.AlphaFilter();
alphaFilter.alpha = 0.5;
container.alpha = 1; // let the filter handle the transparency instead of the container
container.filters = [alphaFilter];
One solution to this problem in general is to draw all the necessary geometry, then set cacheAsBitmap to true on the Graphics object.
cacheAsBitmap is great for graphics that don't change often, and another benefit is that it speeds up rendering.
Unfortunately, there appears to be a bug when using cacheAsBitmap on objects that have parent layers or masks: all the graphics disappear if either is set.
In my particular situation, this does not help me because I need masking. Hopefully it helps someone else though.
Edit
The above solution works if you put the graphics inside a container, and apply the mask to the container. I found this out by complete accident while messing around.

Pygame sprites always on top example?

I was wondering if someone could give me a simple example of how to always draw sprites in pygame on the top layer whilst an animation is being blitted on screen?
The scenario I'm trying to solve is having an animated sprite (e.g. a man walking on the spot) and various other objects passing him, whilst he is still animated.
My solution so far has the animation layering on top of the "passing objects" which is not what I want.
Thanks in advance.
I think you've partially answered your own question here.
It's a matter of organizing what gets drawn first. Some side-scrolling 2D games use a "layer" solution, in which there is a background layer, a middleground layer, and a foreground layer, and the drawing system renders one layer after another.
I've also seen Pokémon top-down style games simply sort the sprites by their vertical position, so sprites "nearest" to the camera are drawn last, and thus on top of the other sprites.
See how the current implementation of Scene2D in libGDX gives each Actor a z-index property which can later be used to organize Actors into layers.
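For instance, the sort-by-vertical-position idea can be as small as this inside the draw loop (assuming all_sprites is any iterable of sprites with image and rect attributes and screen is the display surface):

# Sprites lower on the screen are drawn last, so they end up on top
for sprite in sorted(all_sprites, key=lambda s: s.rect.bottom):
    screen.blit(sprite.image, sprite.rect)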
Just draw your sprites with a LayeredUpdates group:
class pygame.sprite.LayeredUpdates
LayeredUpdates is a sprite group that handles layers and draws like OrderedUpdates.
LayeredUpdates(*sprites, **kwargs) -> LayeredUpdates
You can set the default layer through kwargs using ‘default_layer’ and an integer for the layer. The default layer is 0.
If the sprite you add has an attribute layer then that layer will be used. If the **kwarg contains ‘layer’ then the sprites passed will be added to that layer (overriding the sprite.layer attribute). If neither sprite has attribute layer nor **kwarg then the default layer is used to add the sprites.
and give your sprites the correct layer value.
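A minimal, self-contained sketch along those lines (the sprite classes, colours, and layer numbers are purely illustrative; the player gets the higher layer so it stays on top, and swapping the numbers gives the opposite order):

import pygame

class Player(pygame.sprite.Sprite):
    def __init__(self):
        super().__init__()
        self.image = pygame.Surface((40, 60))
        self.image.fill((200, 50, 50))
        self.rect = self.image.get_rect(center=(160, 120))
        self._layer = 2          # LayeredUpdates reads this attribute when the sprite is added

class PassingObject(pygame.sprite.Sprite):
    def __init__(self, x):
        super().__init__()
        self.image = pygame.Surface((30, 30))
        self.image.fill((50, 50, 200))
        self.rect = self.image.get_rect(center=(x, 130))
        self._layer = 1          # lower layer: drawn first, underneath layer 2

pygame.init()
screen = pygame.display.set_mode((320, 240))
clock = pygame.time.Clock()

sprites = pygame.sprite.LayeredUpdates()
sprites.add(Player(), PassingObject(60), PassingObject(260))

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    screen.fill((0, 0, 0))
    sprites.update()
    sprites.draw(screen)         # layer 1 is drawn first, layer 2 on top of it
    pygame.display.flip()
    clock.tick(60)

pygame.quit()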

Supporting a large number of layers in image editor?

I'm writing a Photoshop-like program for a mobile device and I want to support the use of layers. At most, I can store about 7 bitmaps in memory at a time. I'm trying to see if I can come up with a way of supporting lots of layers (e.g. 10 or 20) while not using much memory.
My current idea is:
Use one bitmap as the active layer that the user can currently paint on and manipulate.
Use one bitmap that stores a flattened version of all the layers below the active layer.
Use one bitmap that stores a flattened version of all the layers above the active layer.
When a layer is not the active layer, I can write it to disk and remove it from memory. When the user switches active layer, I then retrieve the layer from disk and recreate the flattened images.
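In rough pseudocode, the display composite under this scheme would be something like the following (plain "over" blending with straight alpha, NumPy arrays assumed; real code would also apply the active layer's own opacity and blend mode):

import numpy as np

def over(top_rgb, top_a, bottom_rgb, bottom_a):
    # Standard "over" compositing with straight (non-premultiplied) alpha
    out_a = top_a + bottom_a * (1.0 - top_a)
    safe = np.where(out_a == 0.0, 1.0, out_a)
    out_rgb = (top_rgb * top_a[..., None]
               + bottom_rgb * (bottom_a * (1.0 - top_a))[..., None]) / safe[..., None]
    return out_rgb, out_a

def compose_display(below_rgb, below_a, active_rgb, active_a, above_rgb, above_a):
    # Sandwich: flattened layers below, then the editable active layer, then flattened layers above
    rgb, a = over(active_rgb, active_a, below_rgb, below_a)
    rgb, a = over(above_rgb, above_a, rgb, a)
    return rgb, a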
This idea appears sound if each layer only has opacity settings, but I don't think it will work if layers can have different blending modes like screen and multiply. The flattened bottom layers would work fine, but it seems as if I would need to re-render all the top layers again whenever the active layer changed if any of them used a blend mode.
What approach can I use? I've seen various paint programs supporting 100 or more layers, so there must be some trick to it.
Well, I think you've already got a reasonable approach for the case where the layers just have simple opacity, but I can see the problem if they have different blending modes.
One suggestion could be to chop the image into sub-blocks, e.g. 32x32 in size, and only re-blend the ones where something has changed. You could keep a cache of sub-blocks in main memory so that if the user is editing only a small region, you'd have the data you need most of the time. It would be complex, but you could still keep contiguous layers that only need opacity blending flattened to potentially improve performance.
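A rough sketch of that tiling idea (the 32x32 block size, the class layout, and the stand-in blend function are all illustrative; image dimensions are assumed to be multiples of the tile size):

import numpy as np

TILE = 32  # sub-block size in pixels

def blend(dst, src):
    # Stand-in for the real per-layer blend mode (normal, multiply, screen, ...)
    return np.clip(dst + src, 0.0, 1.0)

class TiledComposite:
    # Keeps a flattened result and re-blends only the 32x32 tiles marked dirty
    def __init__(self, layers):
        self.layers = layers                      # list of (H, W, 3) float arrays, bottom to top
        h, w, _ = layers[0].shape
        self.result = np.zeros((h, w, 3))
        self.dirty = np.ones((h // TILE, w // TILE), dtype=bool)

    def mark_dirty(self, y, x, h, w):
        # Call this after the user paints inside the rectangle (x, y, w, h) of any layer
        self.dirty[y // TILE:(y + h - 1) // TILE + 1,
                   x // TILE:(x + w - 1) // TILE + 1] = True

    def reblend(self):
        for ty, tx in zip(*np.nonzero(self.dirty)):
            ys = slice(ty * TILE, (ty + 1) * TILE)
            xs = slice(tx * TILE, (tx + 1) * TILE)
            tile = np.zeros((TILE, TILE, 3))
            for layer in self.layers:
                tile = blend(tile, layer[ys, xs])
            self.result[ys, xs] = tile
        self.dirty[:] = False
        return self.result

After a brush stroke you would call mark_dirty with the stroke's bounding box, and reblend then touches only the affected tiles instead of the whole canvas.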
I seem to remember that Photoshop at least used to do this - they might have had a hierarchy, but the image was split into blocks.
