What is the role of Update() in VTK?

I'm new to VTK and I want to know what the role of Update() is here. Can we just use vtkNew<vtkSphereSource> sphereSource; and it will work?
vtkNew<vtkSphereSource> sphereSource;
sphereSource->Update();

It asks the algorithm to do the actual computation (see the doc). VTK is lazily evaluated, so the output is computed only when required. This allows you to change algorithm parameters without triggering unneeded computations.
Example:
vtkNew<vtkSphereSource> sphereSource;
sphereSource->Update(); // compute sphere with default
vtkPolyData* sphere = sphereSource->GetOutput();
sphereSource->SetThetaResolution(100); // change from default. Does not trigger any computation.
vtkPolyData* oldSphere = sphereSource->GetOutput(); // old output, source still not recomputed
sphereSource->Update(); // compute new sphere
vtkPolyData* sphere100 = sphereSource->GetOutput(); // new output
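Note that when the source feeds a full visualization pipeline, you usually do not call Update() yourself: rendering pulls the data through the pipeline on demand. A minimal sketch using standard VTK classes (mapper/actor/renderer names are the usual VTK ones, not from the original question):
vtkNew<vtkSphereSource> sphereSource; // no explicit Update() needed here
vtkNew<vtkPolyDataMapper> mapper;
mapper->SetInputConnection(sphereSource->GetOutputPort());
vtkNew<vtkActor> actor;
actor->SetMapper(mapper);
vtkNew<vtkRenderer> renderer;
renderer->AddActor(actor);
vtkNew<vtkRenderWindow> window;
window->AddRenderer(renderer);
window->Render(); // the render pulls data through the pipeline, updating the source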

Related

Frame graph architecture

Since no new shader can be created during runtime, the full set is known ahead of time, at compile time. Each shader must reference a "pass" in which it will be used to render.
To avoid frame spikes during runtime, I'd like to pre-create all pipeline objects during startup. To create a pipeline, the number of outputs and the format of each output attachment must be known, either to create a VkRenderPass or to specify the outputs for the dynamic rendering feature.
However, I'd also like to use the frame graph concept (talk by Yuriy O'Donnell), which builds a graph of render passes with input/output specifications and dependencies between them. Some passes are conditionally created (e.g. debug passes), and some passes might be dropped from the graph (after "compiling" it).
Additionally, I need to support the "write on top" feature, so instead of specifying a new output during the building of the render pass, I can simply say that the output of this pass will use an output from a previous pass - this is useful for adding alpha-blended rendering, for example.
How can I match the two separate sections of the code? In other words, how can I define all render passes during initialization but also use a dynamic approach of building the frame graph each frame, without repeating myself?
This is what I'd like to avoid (pseudo-code):
struct Pass1Def
{
    output1 = ImageFormat::RGBA8;
    output2 = ImageFormat::RGBA8;
    // ...
    outputs = // outputs in order (corresponds to locations in the shader)
};

void init()
{
    for_each_shaders shader {
        passDef = findPassDef(shader);
        createPipeline(shader, passDef);
    }
}

void render()
{
    auto previousResource = someCondition ? passA.outputResource1 : passB.outputResource2;
    graph.addPass(..., [&](PassBuilder& builder, Pass1Data& data) {
        // error-prone: the order of function calls matters (corresponds to locations in the shader)
        // error-prone: must use the same format defined in Pass1Def
        data.outputResource1 = builder.create(... ImageFormat::RGBA8);
        // error-prone: the format depends on the outputResource of a previous pass,
        // yet the format must be (and was) specified in Pass1Def
        data.outputResource2 = builder.write(previousResource);
    });
}
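One direction I've considered is describing each pass's outputs once and consuming that single description in both places. A hypothetical sketch (all names invented, not a definitive design):
struct OutputDesc { ImageFormat format; };

struct Pass1Def
{
    // single source of truth: order corresponds to shader locations
    static constexpr std::array<OutputDesc, 2> outputs{
        OutputDesc{ImageFormat::RGBA8},
        OutputDesc{ImageFormat::RGBA8},
    };
};

// at init: createPipeline(shader, Pass1Def::outputs) reads the formats from here
// at render: iterate the same description so order and formats cannot drift
graph.addPass(..., [&](PassBuilder& builder, Pass1Data& data) {
    for (size_t i = 0; i < Pass1Def::outputs.size(); ++i)
        data.resources[i] = builder.create(Pass1Def::outputs[i].format);
});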

Order of action commands using subpass dependency?

From what I have read so far, commands in a single command buffer can execute out of order without explicit synchronization. Here is what the Vulkan spec says (https://vulkan.lunarg.com/doc/view/1.0.26.0/linux/vkspec.chunked/ch02s02.html#fundamentals-queueoperation-commandorder):
"The work involved in performing action commands is often allowed to overlap or to be reordered, but doing so must not alter the state to be used by each action command. In general, action commands are those commands that alter framebuffer attachments, read/write buffer or image memory, or write to query pools."
Edit: At first I thought that set-state commands would act as some kind of barrier to ensure that draw commands execute in order. It has already been explained to me that this is wrong. So I looked at this example of a bloom effect in Vulkan:
https://github.com/SaschaWillems/Vulkan/blob/master/examples/bloom/bloom.cpp
/*First render pass: Render glow parts of the model (separate mesh) to an offscreen frame buffer*/
vkCmdBeginRenderPass(drawCmdBuffers[i], &renderPassBeginInfo, VK_SUBPASS_CONTENTS_INLINE);
vkCmdBindDescriptorSets(drawCmdBuffers[i], VK_PIPELINE_BIND_POINT_GRAPHICS, pipelineLayouts.scene, 0, 1, &descriptorSets.scene, 0, NULL);
vkCmdBindPipeline(drawCmdBuffers[i], VK_PIPELINE_BIND_POINT_GRAPHICS, pipelines.glowPass);
VkDeviceSize offsets[1] = { 0 };
vkCmdBindVertexBuffers(drawCmdBuffers[i], 0, 1, &models.ufoGlow.vertices.buffer, offsets);
vkCmdBindIndexBuffer(drawCmdBuffers[i], models.ufoGlow.indices.buffer, 0, VK_INDEX_TYPE_UINT32);
vkCmdDrawIndexed(drawCmdBuffers[i], models.ufoGlow.indexCount, 1, 0, 0, 0);
vkCmdEndRenderPass(drawCmdBuffers[i]);
/*Second render pass: Vertical blur
Render contents of the first pass into a second framebuffer and apply a vertical blur
This is the first blur pass, the horizontal blur is applied when rendering on top of the scene*/
renderPassBeginInfo.framebuffer = offscreenPass.framebuffers[1].framebuffer;
vkCmdBeginRenderPass(drawCmdBuffers[i], &renderPassBeginInfo, VK_SUBPASS_CONTENTS_INLINE);
vkCmdBindDescriptorSets(drawCmdBuffers[i], VK_PIPELINE_BIND_POINT_GRAPHICS, pipelineLayouts.blur, 0, 1, &descriptorSets.blurVert, 0, NULL);
vkCmdBindPipeline(drawCmdBuffers[i], VK_PIPELINE_BIND_POINT_GRAPHICS, pipelines.blurVert);
vkCmdDraw(drawCmdBuffers[i], 3, 1, 0, 0);
vkCmdEndRenderPass(drawCmdBuffers[i]);
Here are the two subpass dependencies used by both render passes:
dependencies[0].srcSubpass = VK_SUBPASS_EXTERNAL;
dependencies[0].dstSubpass = 0;
dependencies[0].srcStageMask = VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT;
dependencies[0].dstStageMask = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
dependencies[0].srcAccessMask = VK_ACCESS_SHADER_READ_BIT;
dependencies[0].dstAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
dependencies[0].dependencyFlags = VK_DEPENDENCY_BY_REGION_BIT;
dependencies[1].srcSubpass = 0;
dependencies[1].dstSubpass = VK_SUBPASS_EXTERNAL;
dependencies[1].srcStageMask = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
dependencies[1].dstStageMask = VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT;
dependencies[1].srcAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
dependencies[1].dstAccessMask = VK_ACCESS_SHADER_READ_BIT;
dependencies[1].dependencyFlags = VK_DEPENDENCY_BY_REGION_BIT;
My understanding, then, is that these two subpass dependencies are responsible for the execution ordering of the render passes, but I'm not sure how yet, since I'm still fuzzy about subpass dependencies. If my understanding is correct, can you explain why the subpass dependencies help order the draw commands? If I'm wrong, what is ensuring the draw command order?
So what is happening is that something is rendered to img1 (as a color attachment). Then img1 is sampled, and the result is written to img2 (as a color attachment). Then img2 is sampled and written to a swapchain image.
dependencies[0].srcSubpass = VK_SUBPASS_EXTERNAL;
dependencies[0].srcStageMask = VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT;
dependencies[0].srcAccessMask = VK_ACCESS_SHADER_READ_BIT;
dependencies[0].dstSubpass = 0;
dependencies[0].dstStageMask = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
dependencies[0].dstAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
For the first and second render pass instances, this potentially blocks against some earlier sampling of the resource, probably from the previous frame (assuming there is no other synchronization between subsequent frames).
dependencies[1].srcSubpass = 0;
dependencies[1].srcStageMask = VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT;
dependencies[1].srcAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
dependencies[1].dstSubpass = VK_SUBPASS_EXTERNAL;
dependencies[1].dstStageMask = VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT;
dependencies[1].dstAccessMask = VK_ACCESS_SHADER_READ_BIT;
Now, the color attachment is written in VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT and, more importantly (and conveniently), the store operation for color attachments happens in this same stage. The access is also always VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT, regardless of whether the store op is STORE or DONT_CARE.
VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT and VK_ACCESS_SHADER_READ_BIT are again a good match for image sampling (in the fragment shader).
So this means that img1 is fully rendered and stored from the first render pass instance, before it is sampled by the second render pass instance.
And it also means img2 is fully rendered and stored from the second render pass instance, before it is sampled by the third render pass instance.
This is an advanced sample, and you are somewhat expected to already understand synchronization.
State commands are not subject to synchronization. They only change the context of subsequent action commands as soon as they are introduced and typically last until the end of the command buffer, or until the state is changed again.
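For example (a sketch with placeholder pipeline handles), no synchronization is needed between these commands; the binds merely change which pipeline subsequent draws use:
vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipelineA);
vkCmdDraw(cmd, 3, 1, 0, 0); // drawn with pipelineA
vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipelineB);
vkCmdDraw(cmd, 3, 1, 0, 0); // drawn with pipelineB; the bind is not a barrier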
Subpass dependencies and barriers define a dependency in this way: src synchronization scope finishes execution before dst synchronization scope begins execution.
Subpass dependencies and barriers are practically the same. Barriers are typically used outside a render pass, while subpass dependencies are used inside one. Subpasses are unordered with respect to each other, so subpass dependencies additionally have the *Subpass parameters, and their synchronization scopes are limited to the stated subpasses. VK_SUBPASS_EXTERNAL means that work before vkCmdBeginRenderPass / after vkCmdEndRenderPass is part of the synchronization scope.
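For illustration, dependencies[1] expressed as a roughly equivalent pipeline barrier recorded after vkCmdEndRenderPass could look like this (a sketch; the image handle, layouts, and subresource range are placeholders, and a subpass dependency would do the layout transition via the attachment description instead):
VkImageMemoryBarrier barrier{};
barrier.sType = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER;
barrier.srcAccessMask = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT; // attachment writes...
barrier.dstAccessMask = VK_ACCESS_SHADER_READ_BIT;            // ...made visible to sampling
barrier.oldLayout = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
barrier.newLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
barrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
barrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
barrier.image = offscreenColorImage; // placeholder handle
barrier.subresourceRange = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 };
vkCmdPipelineBarrier(drawCmdBuffers[i],
    VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT, // src scope: attachment output
    VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,         // dst scope: fragment sampling
    0, 0, nullptr, 0, nullptr, 1, &barrier);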
It takes time to understand the synchronization system, and I cannot properly cover it here. I have a somewhat more extended answer on barriers at Using pipeline barriers instead of semaphores; otherwise, the internet is full of resources.

Add edge Gremlin query in Node.js

Here is the code for adding a Tribe vertex:
let addTribe = g.addV('tribe')
addTribe.property('tname', addTribeInput.tribename)
addTribe.property('tribeadmin', addTribeInput.tribeadmin)
const newTribe = await addTribe.next()
and here is the code for adding edges:
const addMember = await g.V(addTribeInput.tribeadmin).
    addE('member').
    to(g.V(newTribe.value.id)).
    next()
Is this the correct way of adding edges?
I am just confused about what I need to pass to the .to() method.
Gremlin is meant to be chained, so unless you have an explicit reason to break things up, it's much nicer to just do:
g.addV('tribe').
property('tname', addTribeInput.tribename).
property('tribeadmin', addTribeInput.tribeadmin).as('x').
V(newTribe.value.id).as('y').
addE('member').
from('x').
to('y')
Given your variable names I'm not completely sure that I'm doing what you want exactly (e.g. getting the edge direction right), but the point here is that for adding edges you just need to specify the direction of the edge "from" one vertex (i.e. the starting vertex) "to" another vertex (i.e. the ending vertex).

Applying multiple AffineTransformations in Batik

I am trying to scale and translate an SVG image in Batik.
To do the zooming I use
AffineTransform at= new AffineTransform();
at.scale(sx, sy);
at.translate(tx, ty);
canvas.setRenderingTransform(at, true);
That works quite well (after I found out that the sx, sy, tx, and ty values must be screen coordinates, not SVG coordinates).
But I want to allow multiple scaling operations.
The problem is: I do not manage to "add" another transformation to the existing one.
I tried first reverting the old transformation and then applying the new one. But that leads to another problem: the reversion doesn't work! It produces an image that is smaller than the original one (thus zooming out).
I experimented a bit and tried to apply a transformation, then apply the inverse and then apply the original one again:
final AffineTransform at= new AffineTransform();
at.scale(zoom.sx, zoom.sy);
at.translate(zoom.tx, zoom.ty);
canvas.setRenderingTransform(at, true);
...
final AffineTransform reverseAt = at.createInverse();
canvas.setRenderingTransform(reverseAt, true);
...
final AffineTransform reverseBackAt= reverseAt.createInverse();
canvas.setRenderingTransform(reverseBackAt, true);
The first transformation is correct. The second one leads to rubbish, but applying the original one (or the inverse of the inverse) again leads to the correct result.
So actually, there are two questions:
What is the best way to apply multiple zooming operations?
Why is the result of the inverse transformation not what I expected?
To answer your first question, use AffineTransform.concatenate():
AffineTransform firstTransform = new AffineTransform();
firstTransform.scale(sx, sy);
firstTransform.translate(tx, ty);
// Example: double all sizes
AffineTransform secondTransform = AffineTransform.getScaleInstance(2, 2);
secondTransform.concatenate(firstTransform);
canvas.setRenderingTransform(secondTransform, true);

Dealing with integer-valued features for CRF in mallet

I am just starting to use the SimpleTagger class in MALLET. My impression is that it expects binary features. The model that I want to implement has positive integer-valued features, and I wonder how to implement this in MALLET. Also, I heard that non-binary features need to be normalized if the model is to make sense. I would appreciate any suggestions on how to do this.
PS: Yes, I know that there is a dedicated MALLET mailing list, but I have been waiting nearly a day already to get my subscription approved so I can post there. I'm simply in a hurry.
Well, it's 6 years later now. If you're not in a hurry anymore, you could check out the Java API to create your instances. A minimal example:
private static Instance createInstance(LabelAlphabet labelAlphabet) {
    // observations and labels should be of equal size for linear-chain CRFs
    TokenSequence observations = new TokenSequence();
    int n = 1; // expected sequence length
    LabelSequence labels = new LabelSequence(labelAlphabet, n);
    observations.add(createToken());
    labels.add("idk, some target or something");
    return new Instance(
        observations,
        labels,
        "myInstance",
        null
    );
}
private static Token createToken() {
    Token token = new Token("exampleToken");
    // Note: properties are not used for computing (I think)
    token.setProperty("SOME_PROPERTY", "hello");
    // Any double value works as a feature value; the feature name is a String
    token.setFeatureValue("featureVal", 666.0);
    // etc. for more features ...
    return token;
}
public static void main(String[] args) {
    // Note the first arg is false to denote we *do not* deal with binary features
    InstanceList instanceList = new InstanceList(new TokenSequence2FeatureVectorSequence(false, false));
    LabelAlphabet labelAlphabet = new LabelAlphabet();
    // Converts our tokens to feature vectors
    instanceList.addThruPipe(createInstance(labelAlphabet));
}
Or, if you want to keep using SimpleTagger, just define binary features like HAS_1_LETTER, HAS_2_LETTER, etc., though this seems tedious.