Get Rid of Duplicate Dialog Nodes in Watson Conversation - dialog

I am trying to build a Watson Conversation workspace for an application. I have created a single intent, and it has multiple child dialog nodes. Two of the sibling dialog nodes have the same child nodes, so the hierarchy is repeated under each of them.
So, is there any way to handle this situation? (I mean, a way to reduce duplicate nodes or to reuse existing nodes.) Otherwise the same nodes are repeated under every sibling dialog node.
The image below is self-explanatory: the two child nodes (#boolean:yes / #boolean:no) are identical under both sibling nodes.
So, without creating two similar nodes, how can I create a common node that both siblings can use?
Any help, please...

To solve your issue you can use a "Continue from" (jump) action and point it to the input node just before the place where you want to continue with the tree.
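In the exported workspace JSON this corresponds, as far as I know, to the node's next_step field. Here is a minimal, hypothetical sketch (the node IDs and the condition are made up):

{
  "dialog_node": "sibling_two",
  "conditions": "#some_intent",
  "next_step": {
    "behavior": "jump_to",
    "selector": "user_input",
    "dialog_node": "common_child"
  }
}

With "selector": "user_input" the dialog jumps to the shared node and waits for input there, so both siblings can point at one common subtree instead of duplicating it.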

Related

Bootstrap many new Cassandra nodes into a cluster with no errors

I have a cluster of about 100 nodes and it keeps growing. I need to add 10-50 nodes on request. As far as I know, Cassandra defaults to cassandra.consistent.rangemovement=true, which means multiple nodes cannot bootstrap at the same moment.
Anyway, when I add many nodes using Terraform and a mostly default configuration (managed with Puppet), at least 2-3 of them end up in the UJ (Up/Joining) state and eventually only one bootstraps successfully. Earlier I used a random delay before starting cassandra.service, but that no longer works when adding 10+ nodes.
I'm trying to figure out how to implement some kind of "lock" for bootstrapping.
I have Consul and could take such a lock in its KV store, for instance acquiring it with systemd's ExecStartPre, but I can't see how to release it after the bootstrap finishes.
I'm looking for any solution to this.
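(For what it's worth, the Consul side of that idea could look roughly like this Python sketch: a session-backed lock acquired before cassandra.service starts and released once the node has joined. The Consul address, key name, and TTL are assumptions, and something, such as a wrapper script or a follow-up unit, still has to call the release once nodetool status shows the node as UN.)

import time
import requests

CONSUL = "http://127.0.0.1:8500"       # assumed local Consul agent
LOCK_KEY = "cassandra/bootstrap-lock"  # made-up KV key

def acquire_bootstrap_lock() -> str:
    # A session makes the lock self-releasing if the holder dies.
    sid = requests.put(f"{CONSUL}/v1/session/create",
                       json={"Name": "cassandra-bootstrap", "TTL": "3600s"}).json()["ID"]
    # PUT ?acquire=<session> returns true only for the caller that wins the lock.
    while requests.put(f"{CONSUL}/v1/kv/{LOCK_KEY}",
                       params={"acquire": sid}).json() is not True:
        time.sleep(15)
    return sid

def release_bootstrap_lock(sid: str) -> None:
    # Call this once the new node reaches UN in `nodetool status`.
    requests.put(f"{CONSUL}/v1/kv/{LOCK_KEY}", params={"release": sid})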
I've done something similar using Rundeck before. Basically, we had Rundeck kick off a bash script, passing it parameters about the deployment of our nodes as well as how many to add.
What we did was parse the output of nodetool status: we'd count the total number of nodes as well as the number of UN (Up/Normal) indicators. If those two numbers didn't match, we'd sleep 30s and try again.
Once those numbers matched, we knew it was safe to add another node. Adding all the nodes this way could take a while, but it worked.
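For illustration, here is a minimal Python sketch of that wait-loop (it assumes nodetool is on the PATH; add_next_node is a stand-in for whatever starts the next bootstrap):

import subprocess
import time

def cluster_settled() -> bool:
    """True when every node line of `nodetool status` reports UN (Up/Normal)."""
    out = subprocess.run(["nodetool", "status"],
                         capture_output=True, text=True, check=True).stdout
    # Node lines begin with a two-letter state code: U/D plus N/L/J/M.
    states = [line[:2] for line in out.splitlines()
              if line[:2] in {"UN", "UL", "UJ", "UM", "DN", "DL", "DJ", "DM"}]
    return bool(states) and all(s == "UN" for s in states)

def add_nodes_one_by_one(pending_nodes, add_next_node):
    for node in pending_nodes:
        while not cluster_settled():
            time.sleep(30)      # the same 30-second back-off as the Rundeck script
        add_next_node(node)     # safe: nothing is joining, leaving, or down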

Neo4j: How to create a unique node instead of a set of nodes

I want to create new event nodes over a set of report nodes according to their indicator nodes (each report node has several indicator nodes related to it). I want the new event nodes to follow these rules:
a report node is connected to exactly one event node
if more than one indicator node has the same "pattern" property, then their reports belong to the same event node
Here is my query:
OPTIONAL MATCH
(indicator_1_1:indicator)<-[:REFERS_TO]-(report_1:report)-[:REFERS_TO]->(indicator_1_2:indicator),
(indicator_2_1:indicator)<-[:REFERS_TO]-(report_2:report)-[:REFERS_TO]->(indicator_2_2:indicator)
WHERE
indicator_1_1.pattern=indicator_2_1.pattern
and
indicator_1_2.pattern=indicator_2_2.pattern
MERGE
(report_1)-[:related_to]->(event:EVENT)<-[:related_to]-(report_2)
and I get the result below.
But I want the three report nodes to belong to one event node.
I want to know what changes I should make to my query, or what next step I should take after getting the two event nodes.
What's more, I'd like to know whether there is a more efficient query than mine.
Thanks!
I don't have any data to confirm this, but I think a small change to your Cypher query will produce what you want.
From the Neo4j Cypher Manual chapter on MERGE (my emphasis added).
When using MERGE on full patterns, the behavior is that either the
whole pattern matches, or the whole pattern is created. MERGE will
not partially use existing patterns — it’s all or nothing. If
partial matches are needed, this can be accomplished by splitting a
pattern up into multiple MERGE clauses.
So, following this, I think if you change
MERGE (report_1)-[:related_to]->(event:EVENT)<-[:related_to]-(report_2)
to
MERGE (report_1)-[:related_to]->(event:EVENT)
MERGE (event)<-[:related_to]-(report_2)
... you will prevent the extra :EVENT nodes from being created and get the graph you are looking for.
Finally, I found the answer. My solution is to merge the :EVENT nodes first, and then the duplicate relationships.
Step 1: merge the :EVENT nodes
MATCH ()-[r_1:related_to]->(event_1:EVENT)<-[r_2:related_to]-()-[r_3:related_to]->(event_2:EVENT)<-[r_4:related_to]-()
CALL apoc.refactor.mergeNodes([event_1, event_2]) YIELD node
RETURN node
Step 2: delete the duplicate relationships
// Group parallel relationships by endpoints and type, keep the first, delete the rest
MATCH (X)-[r]->(Y)
WITH X, Y, type(r) AS t, TAIL(collect(r)) AS rr
FOREACH (r IN rr | DELETE r)

slurm - I/O shared between two nodes? Is that possible?

I am working with NGS data and the newest test files are massive.
Normally our pipeline uses just one node, and the output of the different tools goes to that node's ./scratch folder.
Using just one node is not possible with the current massive data set. That's why I would like to use at least 2 nodes, to solve issues such as speed and jobs not all being submitted.
Using multiple nodes or even multiple partitions is easy - I know which parameters to use for that step.
So my issue is not about missing parameters, but about the logic behind Slurm for solving the following I/O issue:
Let's say I have tool-A. Tool-A runs 700 jobs across two nodes (340 jobs on node1 and 360 on node2), and the output is saved to ./scratch on each node separately.
Tool-B then uses the results of tool-A, which now sit on two different nodes.
What is the best approach to deal with that?
- Is there a parameter which tells Slurm which jobs belong together and where to find the input for tool-B?
- Would it be smarter to write the output somewhere other than the node-local ./scratch folder?
- Or would it be better to merge the output of tool-A from both nodes onto one node?
- Any other ideas?
I hope I made my issue simple to understand... Please forgive me if that is not the case!
My naive suggestion would be: why not share a scratch NFS volume across all nodes? This way, all output data from tool-A would be accessible to tool-B regardless of the node. It might not be the best solution for read/write speed, but to my mind it would be the easiest for your situation.
A more software-based solution (not too hard to develop) would be a database that tracks where each file has been generated.
I hope it helps!
... just for those coming across this via search engines: if you cannot use any kind of shared filesystem (NFS, GPFS, Lustre, Ceph) and your data sets are not uniformly massive, you can use "staging", meaning data transfer before and after your job really runs.
Though this is termed "cast"ing in the Slurm universe, it generally means you define
files to be copied to all nodes assigned to your job BEFORE the job starts, and
files to be copied back from the nodes assigned to your job AFTER the job completes.
This can be a way to get everything needed to and from your job's nodes even without a shared file system.
Check the man page of "sbcast" and amend your sbatch job scripts accordingly.
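As a concrete illustration, a staged job could look like the following sketch, written here as a Python batch script (the file names, paths, and the tool-B invocation are made up; sbcast and srun are the real commands):

#!/usr/bin/env python3
# Hypothetical sbatch job script; note that sbcast only works inside a job allocation.
import subprocess

# Stage in: copy the input file to node-local /scratch on EVERY node of this job.
subprocess.run(["sbcast", "toolA_merged_output.dat", "/scratch/toolA.dat"], check=True)

# Run tool-B via srun; each task reads its own node-local copy.
subprocess.run(["srun", "tool-B", "--input", "/scratch/toolA.dat"], check=True)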

Routing table creation at a node in a Pastry P2P network

This question is about the routing table creation at a node in a p2p network based on Pastry.
I'm trying to simulate this scheme of routing table creation in a single JVM. I can't seem to understand how these routing tables are created from the point of joining of the first node.
I have N independent nodes, each with a 160-bit nodeId generated as a SHA-1 hash, and a function to determine the proximity between these nodes. Let's say the 1st node starts the ring and joins it. The protocol says that this node should have had its routing tables set up at this time. But I do not have any other nodes in the ring at this point, so how does it even begin to create its routing tables?
When the 2nd node wishes to join the ring, it sends a Join message (containing its nodeID) to the 1st node, which passes it around in hops to the closest available neighbor of this 2nd node already existing in the ring. These hops contribute to the creation of routing table entries for this new 2nd node. Again, in the absence of a sufficient number of nodes, how do all these entries get created?
I'm just beginning to take a look at the FreePastry implementation to get these answers, but it doesn't seem very apparent at the moment. If anyone could provide some pointers here, that'd be of great help too.
My understanding of Pastry is not complete, by any stretch of the imagination, but it was enough to build a more-or-less working version of the algorithm. Which is to say, as far as I can tell, my implementation functions properly.
To answer your first question:
The protocol says that this [first] node should have had its routing tables
set up at this time. But I do not have any other nodes in the ring at
this point, so how does it even begin to create its routing tables?
I solved this problem by first creating the Node and its state/routing tables. The routing tables, when you think about it, are just information about the other nodes in the network. Because this is the only node in the network, the routing tables are empty. I assume you have some way of creating empty routing tables?
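If it helps, here is a minimal Python sketch of what "empty routing tables" can mean concretely (illustrative only, not FreePastry's API; a 160-bit ID written as 40 hex digits gives 40 rows of 16 columns):

import hashlib

ROWS, COLS = 40, 16  # one row per hex digit of the 160-bit ID

class PastryNode:
    def __init__(self, key: bytes):
        self.node_id = hashlib.sha1(key).hexdigest()
        # Row r, column c holds a node that shares r leading ID digits with
        # us and has digit c next. The first node in the ring knows nobody,
        # so every slot starts out empty.
        self.routing_table = [[None] * COLS for _ in range(ROWS)]
        self.leaf_set = []          # numerically closest nodes; empty at start
        self.neighborhood_set = []  # closest nodes by the proximity metric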
To answer your second question:
When the 2nd node wishes to join the ring, it sends a Join
message (containing its nodeID) to the 1st node, which passes it around
in hops to the closest available neighbor of this 2nd node already
existing in the ring. These hops contribute to the creation of routing
table entries for this new 2nd node. Again, in the absence of a
sufficient number of nodes, how do all these entries get created?
You should take another look at the paper (PDF warning!) that describes Pastry; it does a rather good job of explaining the process of nodes joining and exiting the cluster.
If memory serves, the second node sends a message that not only contains its node ID, but actually uses its node ID as the message's key. The message is routed like any other message in the network, which ensures that it quickly winds up at the node whose ID is closest to the ID of the newly joined node. Every node that the message passes through sends their state tables to the newly joined node, which it uses to populate its state tables. The paper explains some in-depth logic that takes the origin of the information into consideration when using it to populate the state tables in a way that, I believe, is intended to reduce the computational cost, but in my implementation, I ignored that, as it would have been more expensive to implement, not less.
To answer your question specifically, however: the second node will send a Join message to the first node. The first node will send its state tables (empty) to the second node. The second node will add the sender of the state tables (the first node) to its state tables, then add the appropriate nodes from the received state tables to its own (no nodes, in this case). The first node would forward the message on to a node whose ID is closer to the second node's, but no such node exists, so the message is considered "delivered", and both nodes are considered to be participating in the network at this time.
Should a third node join and route a Join message to the second node, the second node would send the third node its state tables. Then, assuming the third node's ID is closer to the first node's, the second node would forward the message to the first node, who would send the third node its state tables. The third node would build its state tables out of these received state tables, and at that point it is considered to be participating in the network.
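Continuing the PastryNode sketch above, that hand-off might look like this (route_next is a stand-in for ordinary Pastry routing; the state-copying is deliberately simplified relative to the paper):

    def handle_join(self, newcomer: "PastryNode") -> None:
        # Every node the Join message passes through hands the newcomer its state.
        newcomer.absorb_state(self.routing_table, self.leaf_set)
        closer = self.route_next(newcomer.node_id)  # next node with a closer ID, if any
        if closer is None:
            return  # we are numerically closest: the Join is "delivered"
        closer.handle_join(newcomer)

    def absorb_state(self, routing_table, leaf_set) -> None:
        # Simplified: adopt any entry we lack. Real Pastry fills row i from
        # the i-th node along the route, as the paper describes.
        # (Leaf-set maintenance is omitted for brevity.)
        for r in range(ROWS):
            for c in range(COLS):
                if self.routing_table[r][c] is None:
                    self.routing_table[r][c] = routing_table[r][c]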
Hope that helps.

How can I model this relationship in Xcode 4?

iOS 5.1 / Xcode 4.3
I have 2 entities in my Core Data model; let's call them Jobs and Workers. Each Job has a Joiner, a Brickie, and a Plumber; these are fields which I want to relate to 3 different Workers.
A Worker's role is a text property that is populated from a pick list when the Worker is created.
As the Workers don't have role-specific fields, I'm unsure how to satisfy Core Data's need for inverse relationships.
Any help would be appreciated; this is my first Core Data project and I'm not even sure my model is appropriate for this kind of storage.
Thanks
Thanks for the quick and clear answer, Matthias. Just to clarify: I will add new worker roles to the pick list in Xcode; the user won't have that feature.
I considered option 1 but rejected it because of the work involved in adding a new role.
I like option 2 better. Question: would the workers relationship in the Job object be a collection of all Workers associated with the Job? And would the fetched properties be generated on the fly from the Role properties in that collection?
If I didn't use the fetched properties, would I need to iterate over the workers relationship to find the plumber, rather than having a direct link?
This site won't let me add comments to answers or even my own question, so I've had to put a response here :(
Option 1: Create a different entity for each worker role. They could all have a parent entity like Worker where you would put the common attributes.
Option 2: Add a role attribute to the Worker. You could then create fetched properties to get the different kinds of workers.
If you want it dynamic (e.g. in the year 2525 somebody uses your app and needs a technician to install a teleportation device), choose option 2. But then without fetched properties.
