From what I have read so far, commands in a single command buffer can execute out of order without explicit synchronization. Here is what the Vulkan spec says (https://vulkan.lunarg.com/doc/view/1.0.26.0/linux/vkspec.chunked/ch02s02.html#fundamentals-queueoperation-commandorder):
"The work involved in performing action commands is often allowed to overlap or to be reordered, but doing so must not alter the state to be used by each action command. In general, action commands are those commands that alter framebuffer attachments, read/write buffer or image memory, or write to query pools."
Edit: At first I thought that state-setting commands would act as some kind of barrier to ensure that draw commands execute in order. It has already been explained to me that this is wrong. So I looked at this example of a bloom effect in Vulkan:
https://github.com/SaschaWillems/Vulkan/blob/master/examples/bloom/bloom.cpp
/*First render pass: Render glow parts of the model (separate mesh) to an offscreen frame buffer*/
vkCmdBeginRenderPass(drawCmdBuffers[i], &renderPassBeginInfo, VK_SUBPASS_CONTENTS_INLINE);
vkCmdBindDescriptorSets(drawCmdBuffers[i], VK_PIPELINE_BIND_POINT_GRAPHICS, pipelineLayouts.scene, 0, 1, &descriptorSets.scene, 0, NULL);
vkCmdBindPipeline(drawCmdBuffers[i], VK_PIPELINE_BIND_POINT_GRAPHICS, pipelines.glowPass);
VkDeviceSize offsets[1] = { 0 };
vkCmdBindVertexBuffers(drawCmdBuffers[i], 0, 1, &models.ufoGlow.vertices.buffer, offsets);
vkCmdBindIndexBuffer(drawCmdBuffers[i], models.ufoGlow.indices.buffer, 0, VK_INDEX_TYPE_UINT32);
vkCmdDrawIndexed(drawCmdBuffers[i], models.ufoGlow.indexCount, 1, 0, 0, 0);
vkCmdEndRenderPass(drawCmdBuffers[i]);
/*Second render pass: Vertical blur
Render contents of the first pass into a second framebuffer and apply a vertical blur
This is the first blur pass, the horizontal blur is applied when rendering on top of the scene*/
renderPassBeginInfo.framebuffer = offscreenPass.framebuffers[1].framebuffer;
vkCmdBeginRenderPass(drawCmdBuffers[i], &renderPassBeginInfo, VK_SUBPASS_CONTENTS_INLINE);
vkCmdBindDescriptorSets(drawCmdBuffers[i], VK_PIPELINE_BIND_POINT_GRAPHICS, pipelineLayouts.blur, 0, 1, &descriptorSets.blurVert, 0, NULL);
vkCmdBindPipeline(drawCmdBuffers[i], VK_PIPELINE_BIND_POINT_GRAPHICS, pipelines.blurVert);
vkCmdDraw(drawCmdBuffers[i], 3, 1, 0, 0);
vkCmdEndRenderPass(drawCmdBuffers[i]);
Here are the two subpass dependencies used by both render passes:
dependencies[0].srcSubpass = VK_SUBPASS_EXTERNAL;
dependencies[0].dstSubpass = 0;
dependencies[0].srcStageMask = VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT;
dependencies[0].dstStageMask = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
dependencies[0].srcAccessMask = VK_ACCESS_SHADER_READ_BIT;
dependencies[0].dstAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
dependencies[0].dependencyFlags = VK_DEPENDENCY_BY_REGION_BIT;
dependencies[1].srcSubpass = 0;
dependencies[1].dstSubpass = VK_SUBPASS_EXTERNAL;
dependencies[1].srcStageMask = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
dependencies[1].dstStageMask = VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT;
dependencies[1].srcAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
dependencies[1].dstAccessMask = VK_ACCESS_SHADER_READ_BIT;
dependencies[1].dependencyFlags = VK_DEPENDENCY_BY_REGION_BIT;
My understanding is now that these two subpass dependencies are responsible for the execution ordering of the render passes, but I am not sure how, since I am still fuzzy about subpass dependencies. If my understanding is correct, can you explain why the subpass dependencies order the draw commands? If I am wrong, what is ensuring the draw command order?
So what is happening is that something is rendered to img1 (as a color attachment). Then img1 is sampled, and the result is written to img2 (as a color attachment). Then img2 is sampled and written to a swapchain image.
dependencies[0].srcSubpass = VK_SUBPASS_EXTERNAL;
dependencies[0].srcStageMask = VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT;
dependencies[0].srcAccessMask = VK_ACCESS_SHADER_READ_BIT;
dependencies[0].dstSubpass = 0;
dependencies[0].dstStageMask = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
dependencies[0].dstAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
For the first and second render pass instances, this possibly blocks some previous sampling of the resource, probably from the previous frame (assuming there is no other synchronization between subsequent frames).
dependencies[1].srcSubpass = 0;
dependencies[1].srcStageMask = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
dependencies[1].srcAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
dependencies[1].dstSubpass = VK_SUBPASS_EXTERNAL;
dependencies[1].dstStageMask = VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT;
dependencies[1].dstAccessMask = VK_ACCESS_SHADER_READ_BIT;
Now, the color attachment is written in VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT, and, more importantly (and conveniently), the store operation for color attachments also happens in this same stage. The access is always VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT, regardless of whether the store op is STORE or DONT_CARE.
VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT and VK_ACCESS_SHADER_READ_BIT are again a good match for image sampling (in the fragment shader).
So this means that img1 is fully rendered and stored by the first render pass instance before it is sampled by the second render pass instance.
It also means img2 is fully rendered and stored by the second render pass instance before it is sampled by the third render pass instance.
This is an advanced sample, and you are somewhat expected to already understand synchronization.
State commands are not subject to synchronization. They simply change the context of subsequent action commands from the point where they are recorded, and that state typically lasts until the end of the command buffer, or until it is changed again.
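For illustration, here is a minimal sketch (cmd, pipelineA, and pipelineB are hypothetical handles):
vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipelineA); // state command
vkCmdDraw(cmd, 3, 1, 0, 0); // action command, uses pipelineA
vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipelineB); // state change
vkCmdDraw(cmd, 3, 1, 0, 0); // uses pipelineB, but may still overlap the first draw on the GPU
Each draw consumes whatever state was recorded before it, yet without an explicit dependency the two draws may still execute out of order or overlap.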
Subpass dependencies and barriers define a dependency in this way: the src synchronization scope finishes execution before the dst synchronization scope begins execution.
Subpass dependencies and barriers are practically the same. Barriers are typically used outside a render pass, while subpass dependencies are used inside one. Subpasses are unordered with respect to each other, so subpass dependencies additionally have *Subpass parameters, and the synchronization scopes are limited to only the stated subpasses. VK_SUBPASS_EXTERNAL means that work before vkCmdBeginRenderPass / after vkCmdEndRenderPass is part of the synchronization scope.
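For intuition, dependencies[1] above expresses roughly what the following standalone barrier would, if it were recorded between vkCmdEndRenderPass and the next vkCmdBeginRenderPass (a sketch only: the sample needs the subpass-dependency form, because render passes also perform image layout transitions, which a plain memory barrier does not cover):
VkMemoryBarrier barrier{};
barrier.sType = VK_STRUCTURE_TYPE_MEMORY_BARRIER;
barrier.srcAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT; // attachment writes...
barrier.dstAccessMask = VK_ACCESS_SHADER_READ_BIT;            // ...made available to shader reads
vkCmdPipelineBarrier(
    drawCmdBuffers[i],
    VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT, // src scope finishes first
    VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,         // then dst scope may begin
    0, 1, &barrier, 0, nullptr, 0, nullptr);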
It takes time to understand the synchronization system, and I cannot properly cover it here. I have a somewhat more extended answer on barriers at Using pipeline barriers instead of semaphores; otherwise, the internet is full of resources.
I am new to VTK and I want to know what the role of Update is here. Can we not just use vtkNew<vtkSphereSource> sphereSource; and it will work?
vtkNew<vtkSphereSource> sphereSource;
sphereSource->Update();
It asks the algorithm to do the actual computation (see the doc). This is because VTK is lazily evaluated: output is computed only when required. This allows you to change algorithm parameters without triggering unneeded computations.
Example:
vtkNew<vtkSphereSource> sphereSource;
sphereSource->Update(); // compute sphere with default
vtkPolyData* sphere = sphereSource->GetOutput();
sphereSource->SetThetaResolution(100); // change from default. Does not trigger any computation.
vtkPolyData* oldSphere = sphereSource->GetOutput(); // old output, source still not recomputed
sphereSource->Update(); // compute new sphere
vtkPolyData* sphere100 = sphereSource->GetOutput(); // new output
I have a question, hoping for your advice. In my project I am working on the different entities that will be present; one of them is the "TreeEntity". It is visibly composed of four parts: leaves, shadow, stump, and trunk. Each of the four parts has a single ID and is chosen randomly when a new "TreeEntity" is created. In addition, the leaves and shadow each have four different states based on the age of the tree, and the shadow has to rotate around the trunk based on the time. Now, since I would like to make the engine of my project as universal as possible, so that it can also be used in other projects or by other people, I am facing a dead end.
I have this result at the moment (note: all the images used are for testing purposes only):
As can be seen, the shadow is very detached from the trunk. Theoretically, this could be fixed by modifying the image used:
Or by writing a huge dictionary with the precise anchors for each image that makes up the leaves, shadow, stump, and trunk.
But doing so would be detrimental to my goal. So what I would like to ask is whether, in pyglet or otherwise, regardless of the shadow used (which in this case is random), it is possible to choose an anchor point that places it in the right place? Or do I have to modify the image or create a dictionary?
My question arises from the fact that I have several times seen images, for example of shadows, like the one I inserted here, and in those cases the shadows were in their correct place. The same game from which I took the test images, and from which I am taking inspiration for one or two mechanics, has its shadows placed in the right place. So, honestly, I do not know which way to go.
At the moment, all four parts have a universal anchor dictated by a small dictionary:
__statdict = {
    "leaves": {
        "cell": (3, 1),
        "anchor_x": "center",
        "anchor_y": "bottom"
    },
    "shadow": {
        "cell": (4, 1),
        "anchor_x": "left",
        "anchor_y": "bottom"
    },
    "stump": {
        "cell": (1, 1),
        "anchor_x": "center",
        "anchor_y": "bottom"
    },
    "trunk": {
        "cell": (1, 1),
        "anchor_x": "center",
        "anchor_y": "bottom"
    }
}
And when the resources are loaded, the anchor is assigned:
# Resolve the pixel anchor from the per-part settings in __statdict.
if self.__statdict[name]["anchor_x"] == "center":
    ancx = texture.width // 2
elif self.__statdict[name]["anchor_x"] == "right":
    ancx = texture.width
else:  # "left"
    ancx = 0
if self.__statdict[name]["anchor_y"] == "center":
    ancy = texture.height // 2
elif self.__statdict[name]["anchor_y"] == "top":
    ancy = texture.height
else:  # "bottom"
    ancy = 0
texture.anchor_x = ancx
texture.anchor_y = ancy
I have tried to be as complete as possible. Thanks for your help. Maybe I am having problems over nothing and the solution is simpler than I thought; if so, I apologize.
Update (1):
Is it possible, via pyglet or an accompanying module, given the direction of a sprite (in this case, to the right), to know the position of the first pixel that is not fully transparent? Something like this: "The direction of the SpriteShadow is to the right. The first pixel with an RGB value whose alpha equals 255 is at y = 20 (relative to the width of the sprite). Then I move the SpriteShadow to the left, with respect to the general anchor, by 20 pixels."
I'm creating a little program that creates image patterns like these using line segments that rotate around each other:
image from Engare Steam store page for reference
How do I tell Godot to create instances of the Polygon2D scene I'm using as line segments, with their origins on the Position2D nodes that exist as children in the Polygon2D scene? Here is a sample of my code:
const SHAPE_MASTER = preload("res://SpinningBar.tscn")
...
func bar_Maker(bar_num, parent_node):
    for i in range(bar_num):
        var GrabbedInstance = SHAPE_MASTER.instance()
        parent_node.add_child(GrabbedInstance)
        bar_Maker(bar_num - 1, $Polygon2D/RotPoint)
...
func _physics_process(delta):
    ...
    if Input.is_action_just_pressed("Switch Item"): # from an older version of the program, just bound to ctrl
        bar_Maker(segment_count, BarParent)
bar_num is the number of bars to instance, set elsewhere (range 1-6).
parent_node in the main scene is just a Node2D called BarParent.
The SpinningBar.tscn I'm instancing as GrabbedInstance has a Position2D node called "RotPoint" at the opposite end of the segment from the object's origin. This is the point I would like successive segments to rotate around (and also put particle emitters here to trace lines, but that is a trivial issue once this first one is answered).
Running as-is and creating more than one line segment returns "Attempt to call function 'add_child' in base 'null instance' on a null instance." Clearly I am adding the second (and later) segments incorrectly, so I know it's related to how I'm performing the recursion/selecting new parents for segments more than one node deep.
Bang your head against the wall and you will find the answer:
func bar_Maker(bar_num, parent_node):
    for i in range(bar_num):
        var GrabbedInstance = SHAPE_MASTER.instance()
        parent_node.add_child(GrabbedInstance)
        # Recurse into the new instance's own RotPoint child,
        # not the $Polygon2D/RotPoint of the node running the script.
        var new_children = GrabbedInstance.get_children()
        bar_Maker(bar_num - 1, new_children[0])
If someone is aware of a more elegant way to do this please inform me and future readers. o7
Is there a way to dynamically build an SVG using sprites in amcharts 4?
Example: screenshot
There are 20 different types, which are represented by colors.
Each pin can contain a multitude of types.
So, for example, a pin can have 3 types and will consist of 3 colors.
I have an SVG path which is a circle.
With regular JS and SVG I can create a path for each type and change its stroke color, stroke-dasharray, and stroke-dashoffset.
This results in a nice circle with 3 colors.
However, this seems to be impossible to do with amCharts 4.
For starters, stroke-dashoffset is not even a supported property for a Sprite. Why would you bother supporting stroke-dasharray and then ignore stroke-dashoffset?!
The second problem is finding out how to pass data to the sprite.
This is an example of a data object I pass to the MapImageSeries class:
[{
    amount: 3,
    client: undefined,
    colorsArr: { 0: "#FFB783", 1: "#FD9797", 2: "#77A538" },
    dashArray: "500,1000",
    dashOffset: 1500,
    divided: 500,
    global: true,
    groupId: "minZoom-1",
    hcenter: "middle",
    id: "250",
    latitude: 50.53398,
    legendNr: 8,
    longitude: 9.68581,
    name: "Fulda",
    offsetsArr: { 0: 0, 1: 500, 2: 1000 },
    scale: 0.5,
    title: "Fulda",
    typeIds: ["4", "18", "21"],
    typeMarker: " type-21 type-18 type-4",
    vcenter: "bottom",
    zoomLevel: 5
}]
It seems impossible to pass the colors down to the sprite.
var svgPath = 'M291,530C159,530,52,423,52,291S159,52,291,52s239,107,239,239c0,131.5-106.3,238.3-237.7,239'
var mainPin1 = single.createChild(am4core.Sprite)
mainPin1.strokeWidth = 100
mainPin1.fill = am4core.color('#fff')
mainPin1.stroke = am4core.color('#ff0000')
mainPin1.propertyFields.strokeDasharray = 'dashArray'
mainPin1.propertyFields.strokeDashoffset = 'dashOffset'
mainPin1.path = svgPath
mainPin1.scale = 0.04
mainPin1.propertyFields.horizontalCenter = 'hcenter'
mainPin1.propertyFields.verticalCenter = 'vbottom'
With what you've provided, simulating your custom SVGs is beyond the scope of what can be answered, so I'll try tackling:
applying stroke-dashoffset despite the lack of innate library support (I see you've added a feature request on GitHub for it, so why the library doesn't include it, and when/if it will, can be left for discussion there);
passing data/colors to the Sprite.
For both we're going to have to wait until the instances of Sprites are ready along with their data. Presuming your single variable is a reference to a MapImageSeries.mapImages.template, we can set up an "inited" event like so:
single.events.once("inited", function(event){
// ...
});
Our data and data placeholders don't really support nested arrays/objects in general. Since your colors are nested within a field, we can find them via:
event.target.dataItem.dataContext.colorsArr
You can then set the fill and stroke on the Sprite, or on event.target.children.getIndex(0), manually from there (in my demo below the index is 1, because mainPin1 is not the first/only child created on the MapImage template).
As for stroke-dashoffset, you can access the actual rendered SVGElement via sprite.group.node and just use setAttribute.
I forked our map image demo from our map image data guide and added all the above to it here:
https://codepen.io/team/amcharts/pen/6a3d87ff3bdee7b85000fe775af9e583
I've been converting my own personal OGLES 2.0 framework to take advantage of the functionality added by the new iOS 5 framework GLKit.
After pleasing results, I now wish to implement the colour-based picking mechanism described here. For this, you must access the back buffer to retrieve a touched pixel's RGBA value, which is then used as a unique identifier for a vertex/primitive/display object. Of course, this requires temporarily giving all vertices/primitives/display objects unique colours.
I have two questions, and I'd be very grateful for assistance with either:
1. I have access to a GLKViewController, GLKView, CAEAGLLayer (of the GLKView) and an EAGLContext. I also have access to all OGLES 2.0 buffer-related commands. How do I combine these to identify the color of a pixel in the EAGLContext I'm tapping on-screen?
2. Given that I'm using Vertex Buffer Objects to do my rendering, is there a neat way to override the colour provided to my vertex shader which firstly doesn't involve modifying the buffered vertex (colour) attributes, and secondly doesn't involve the addition of an IF statement into the vertex shader?
I assume the answer to (2) is "no", but for reasons of performance and non-arduous code revamping I thought it wise to check with someone more experienced.
Any suggestions would be gratefully received. Thank you for your time
UPDATE
Well, I now know how to read pixel data from the active framebuffer using glReadPixels. So I guess I just have to do the special "unique colours" render to the back buffer, briefly switch to it and read the pixels, then switch back. This will inevitably create a visual flicker, but I guess it's the easiest way; certainly quicker (and more sensible) than creating a CGImageContextRef from a screen snapshot and analyzing it that way.
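For reference, a minimal sketch of the standard call, assuming the framebuffer to be read is currently bound and x/y are hypothetical window coordinates (note that OpenGL ES places the origin at the bottom-left):
GLubyte pixel[4]; // will receive R, G, B, A in the 0-255 range
glReadPixels(x, y, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);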
Still, any tips as regards the back buffer would be much appreciated.
Well, I've worked out exactly how to do this as concisely as possible. Below I explain how to achieve this and list all the code required :)
In order to allow touch interaction to select a pixel, first add a UITapGestureRecognizer to your GLKViewController subclass (assuming you want tap-to-select-pixel), with the following target method inside that class. You must make your GLKViewController subclass a UIGestureRecognizerDelegate:
@interface GLViewController : GLKViewController <GLKViewDelegate, UIGestureRecognizerDelegate>
After instantiating your gesture recognizer, add it to the view property (which in GLKViewController is actually a GLKView):
// Inside GLKViewController subclass init/awakeFromNib:
[[self view] addGestureRecognizer:[self tapRecognizer]];
[[self tapRecognizer] setDelegate:self];
Set the target action for your gesture recognizer; you can do this when creating it using a particular init... however I created mine using Storyboard (aka "the new Interface Builder in Xcode 4.2") and wired it up that way.
Anyway, here's my target action for the tap gesture recognizer:
-(IBAction)onTapGesture:(UIGestureRecognizer*)recognizer {
    const CGPoint loc = [recognizer locationInView:[self view]];
    [self pickAtX:loc.x Y:loc.y];
}
The pick method called in there is one I've defined inside my GLKViewController subclass:
-(void)pickAtX:(GLuint)x Y:(GLuint)y {
    GLKView *glkView = (GLKView*)[self view];
    UIImage *snapshot = [glkView snapshot];
    [snapshot pickPixelAtX:x Y:y];
}
This takes advantage of a handy new method snapshot that Apple kindly included in GLKView to produce a UIImage from the underlying EAGLContext.
What's important to note is a comment in the snapshot API documentation, which states:
This method should be called whenever your application explicitly needs the contents of the view; never attempt to directly read the contents of the underlying framebuffer using OpenGL ES functions.
This gave me a clue as to why my earlier attempts to invoke glReadPixels to access pixel data generated an EXC_BAD_ACCESS, and it was the indicator that sent me down the right path instead.
You'll notice in my pickAtX:Y: method defined a moment ago I call a pickPixelAtX:Y: on the UIImage. This is a method I added to UIImage in a custom category:
@interface UIImage (NDBExtensions)
-(void)pickPixelAtX:(NSUInteger)x Y:(NSUInteger)y;
@end
Here is the implementation; it's the final code listing required. The code came from this question and has been amended according to the answer received there:
@implementation UIImage (NDBExtensions)

- (void)pickPixelAtX:(NSUInteger)x Y:(NSUInteger)y {
    CGImageRef cgImage = [self CGImage];
    size_t width = CGImageGetWidth(cgImage);
    size_t height = CGImageGetHeight(cgImage);
    if ((x < width) && (y < height))
    {
        CGDataProviderRef provider = CGImageGetDataProvider(cgImage);
        CFDataRef bitmapData = CGDataProviderCopyData(provider);
        const UInt8 *data = CFDataGetBytePtr(bitmapData);
        size_t offset = ((width * y) + x) * 4;
        UInt8 b = data[offset+0];
        UInt8 g = data[offset+1];
        UInt8 r = data[offset+2];
        UInt8 a = data[offset+3];
        CFRelease(bitmapData);
        NSLog(@"R:%i G:%i B:%i A:%i", r, g, b, a);
    }
}

@end
I originally tried some related code found in an Apple API doc entitled "Getting the pixel data from a CGImage context", which required two method definitions instead of this one, but much more code was required, and it returned data of type void * for which I was unable to work out the correct interpretation.
That's it! Add this code to your project, then upon tapping a pixel it will output it in the form:
R:24 G:46 B:244 A:255
Of course, you should write some means of extracting those RGBA int values (which will be in the range 0 - 255) and using them however you want. One approach is to return a UIColor from the above method, instantiated like so:
UIColor *color = [UIColor colorWithRed:red/255.0f green:green/255.0f blue:blue/255.0f alpha:alpha/255.0f];