Free tool to generate all paths from a diagram - modeling

Good afternoon everyone,
Despite a lot of research on the web, I haven't found a solution that meets my need.
I need a free tool to model a process (BPMN, UML activity diagram, or similar) and generate all possible paths/combinations from the diagram.
Do you have any idea which tool could help me do that? Thanks a lot.
Update 1

I am not sure such a tool exists off the shelf. My advice would be to choose one modelling tool which
supports your notation (BPMN, activity diagrams, etc.), and
can be extended with a language you are comfortable with (Python, Java, C#, etc.).
With those criteria in mind, you will find several tools for sure.
For fun, I picked Modelio (https://www.modelio.org/), made a small activity example, and wrote a Jython script for it.
## Return the first initial node in the selected activity.
def getInitialPoint(act):
    for node in act.getOwnedNode():
        if isinstance(node, InitialNode):
            return node

## Traverse the activity nodes, collecting every path.
def getPaths(currentPath, currentNode):
    for outgoing in currentNode.getOutgoing():
        node = outgoing.getTarget()
        if isinstance(node, ActivityFinalNode):
            paths.append(currentPath)
            return
        elif isinstance(node, DecisionMergeNode):
            getPaths(currentPath, node)
        else:
            getPaths(currentPath + " - " + node.getName(), node)

## Init
init = getInitialPoint(elt)
currentPath = init.getName()
global paths
paths = []
getPaths(currentPath, init)

## Print the found paths
for p in paths:
    print p
Hoping it helps,
EBR

Related

Extracting labels from owl ontologies when the label isn't in the ontology but can be found at the URI

Please bear with me as I am new to semantic technologies.
I am trying to use the package rdflib to extract labels from classes in ontologies. However some ontologies don't contain the labels themselves but have the URIs of classes from other ontologies. How does one extract the labels from URIs of the external ontologies?
The intuition behind my attempts centers on identifying classes that don't contain labels locally (if that is the right way of putting it) and then "following" their URIs to the external ontologies to extract the labels. However, the way I have implemented it does not work.
import rdflib

g = rdflib.Graph()

# I have no trouble extracting labels from this ontology:
# g.load("http://purl.obolibrary.org/obo/po.owl#")

# However, this ontology contains no labels locally:
g.load("http://www.bioassayontology.org/bao/bao_complete.owl#")

owlClass = rdflib.namespace.OWL.Class
rdfType = rdflib.namespace.RDF.type

for s in g.subjects(predicate=rdfType, object=owlClass):
    # Where label is present...
    if g.label(s) != '':
        # Do something with label...
        print(g.label(s))
    # This is what I have added to try to follow the URI to the external ontology.
    elif g.label(s) == '':
        g2 = rdflib.Graph()
        g2.parse(location=s)
        # Do something with label...
        print(g.label(s))
Am I taking completely the wrong approach? All help is appreciated! Thank you.
I think you can be much more efficient than this. You are doing a web request, a remote ontology download, and a search every time you encounter a URI without a label in http://www.bioassayontology.org/bao/bao_complete.owl, which is most of them, and that is a very large number. So your script will take forever and thrash the web servers delivering those remote ontologies.
Looking at http://www.bioassayontology.org/bao/bao_complete.owl, I see that most of the URIs without labels there are from OBO, and perhaps a couple of other ontologies, but mostly OBO.
What you should do is download OBO once and load that with RDFlib. Then if you run your script above on the joined (union) graph of http://www.bioassayontology.org/bao/bao_complete.owl & OBO, you'll have all OBO's content at your fingertips so that g.label(s) will find a much higher proportion of labels.
Perhaps there are a couple of other source ontologies providing labels for http://www.bioassayontology.org/bao/bao_complete.owl you may need as well but my quick browsing sees only OBO.
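To make that concrete, here is a minimal sketch of the union-graph approach with rdflib (the local file name obo_merged.owl is hypothetical and stands for the OBO content you downloaded once):

import rdflib

g = rdflib.Graph()
# Parsing several sources into the same Graph object builds their union.
g.parse("http://www.bioassayontology.org/bao/bao_complete.owl", format="xml")
g.parse("obo_merged.owl", format="xml")  # hypothetical local copy of OBO, downloaded once

for s in g.subjects(predicate=rdflib.namespace.RDF.type,
                    object=rdflib.namespace.OWL.Class):
    label = g.label(s)  # now also finds labels contributed by OBO
    if label:
        print(label)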

How to merge nodes and relationships using py2neo v4 and Neo4j

I am trying to perform a basic merge operation to add nonexistent nodes and relationships to my graph by going through a csv file row by row. I'm using py2neo v4, and because there is basically no documentation or examples of how to use py2neo, I can't figure out how to actually get it done. This isn't my real code (it's very complicated to handle many different cases) but its structure is basically like this:
import py2neo as pn

graph = pn.Graph("bolt://localhost:###/", user="neo4j", password="py2neoSux")
matcher = pn.NodeMatcher(graph)
tx = graph.begin()

if matcher.match("Prefecture", name="foo").first() is None:
    previousNode = pn.Node("Type1", name="fo0", yc=1)
else:
    previousNode = matcher.match("Prefecture", name="foo").first()
thisNode = pn.Node("Type2", name="bar", yc=1)

tx.merge(previousNode)
tx.merge(thisNode)
theLink = pn.Relationship(thisNode, "PARTOF", previousNode)
tx.merge(theLink)
tx.commit()
Currently this throws the error
ValueError: Primary label and primary key are required for MERGE operation
the first time it needs to merge a node that it hasn't found (i.e., when creating a node). So then I change the line to:
tx.merge(thisNode, primary_label=list(thisNode.labels)[0], primary_key="name")
Which gives me the error IndexError: list index out of range from somewhere deep in the py2neo source code (....site-packages\py2neo\internal\operations.py", line 168, in merge_subgraph at node = nodes[i]). I tried to figure out what was going wrong there, but I couldn't decipher where the nodes list comes from through various connections to other commands.
So, it currently matches and creates a few nodes without problem, but at some point it will match until it needs to create, and then it fails while trying to create that node (even though it is using the same code and doing the same thing under the same circumstances in a loop). It made it through all 20 rows in my sample once, but it usually stops on rows 3-5.
I thought it had something to do with the transactions (see comments), but I get the same problem when I merge directly on the graph. Maybe it has to do with the py2neo merge function finding more identities for nodes than there are nodes. Maybe there is something wrong with how I specified my primary label and/or key.
Because this error and code are opaque I have no idea how to move forward.
Anybody have any advice or instructions on merging nodes with py2neo?
Of course I'd like to know how to fix my current problem, but more generally I'd like to learn how to use this package. Examples, instructions, real documentation?
I was having a similar problem and just got done ripping my hair out trying to figure out what was wrong! What I learned applies at least in my case, and maybe yours too, since we got similar error messages and were doing similar things: the problem for me was that I was creating Nodes whose __primarykey__ pointed to a different field name than the others.
PSEUDO EXAMPLE:
# in some for loop or complex code
node = Node("Example", name="Test",something="else")
node.__primarykey__ = "name"
<code merging or otherwise creating the node>
# later on in the loop you might have done something like this cause the field was null
node = Node("Example", something="new")
node.__primarykey__ = "something"
I hope this helps and was clear; I'm still recovering from wrapping my head around things. If it's not clear, let me know and I'll revise.
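For what it's worth, here is a minimal sketch of the consistent version (py2neo v4; connection details and field names are placeholders, and I set __primarylabel__ as well, since merge consults it the same way):

import py2neo as pn

graph = pn.Graph("bolt://localhost:7687", user="neo4j", password="secret")

# Key every "Example" node on the same field, and populate that field on
# every node, even when the source value happens to be missing.
node = pn.Node("Example", name="Test", something="else")
node.__primarylabel__ = "Example"
node.__primarykey__ = "name"
graph.merge(node)

node = pn.Node("Example", name="unknown", something="new")
node.__primarylabel__ = "Example"
node.__primarykey__ = "name"  # same primary key field every time
graph.merge(node)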
Good luck.

Too much error correction made the code worse

This is my first time with Python, and I have written this program to simulate a node trying to find mobile phones in an area.
First I take the distances, convert them to RSS, and based on that decide the direction of the node to find all the mobile phones.
The code worked fine until 2 days ago, but when I expanded the area and the number of nodes, some errors started to appear and I kept correcting them.
Now it doesn't run, and I have made it worse.
Is there any hint or guidance I can get from the experts?
And yes, I have overwritten the code without saving it somewhere else.
Or maybe someone can really help me by having a look at it.
It's really messy though.
Thanks.
Here is the part of the code I am having the most trouble with.
def pos_y(uav_current_cord, area, uav_step_size, flag):
    global distance_covered
    detected_y = 0
    detected_nodes_pos_y = 0
    if flag == 1:
        span = area + uav_step_size - uav_current_cord[1]
        #print(span)
        for y in range(uav_current_cord[1], area + uav_step_size, uav_step_size):
            distance_covered += uav_step_size
            UAV_new = [uav_current_cord[0], y]
            y_last_pos = UAV_new
            plot(*UAV_new, marker='o', color='r', ls='')
            distance_new = []
            for i in nodes:
                temp_x_axis = euclid_dist(UAV_new, node_cord[i])
                ss_x_axis = dist_to_ss(temp_x_axis)
                if ss_x_axis > threshold:
                    detected_nodes_pos_y += 1
                    detected_y = detected_nodes_pos_y
            if y >= area + uav_step_size:
                uav_current_cord = UAV_new
                flag = 1
                keep_moving = neg_x(uav_current_cord, span, uav_step_size, flag, y_last_pos)
                uav_current_new = keep_moving[0]
                distance_covered_back = keep_moving[1]
                nodes_detected_final = keep_moving[2]
                uav_current_new = keep_moving[3]
                #return [uav_current_new, distance_covered, nodes_detected_final, uav_current_new]
        #uav_current_new = y_last_pos
        return [y_last_pos, distance_covered, detected_y, y_last_pos]
    if flag == 1:
        area1 = area * 2
        for y in range(uav_current_cord[1], area, uav_step_size):
            UAV_new = [uav_current_cord[0], y]
            #print('Pos_Y movement of UAV', UAV_new)
            #print('this loop is running')
            y_last_pos = UAV_new
            plot(*UAV_new, marker='o', color='g', ls='')
            distance_new = []
            for i in nodes:
                temp_x_axis = euclid_dist(UAV_new, node_cord[i])
                ss_x_axis = dist_to_ss(temp_x_axis)
                if ss_x_axis > threshold:
                    detected_nodes_pos_y += 1
                    detected_y = detected_nodes_pos_y
        #print('nodes detected in pos Y ', detected_y)
        #print('last position in pos y =', y_last_pos)
        for y in range(UAV_new[1], uav_current_cord[1], -uav_step_size):
            #print('now this one ran')
            UAV_new = [uav_current_cord[0], y]
            plot(*UAV_new, marker='o', color='g', ls='')
        distance_covered_back = area * 2
        return [UAV_new, distance_covered_back, detected_y, y_last_pos]
As others have said, use git, but know the alternatives.
If you're brand new to revision control, starting with Git will either be a big help or a hindrance for the time being, but learning Git will be valuable down the line regardless.
If you have an MSDN account, you can also use Visual Studio's built-in revision control. It's good for rapid prototyping, but has no real edge over git.
Another common choice that is pretty simple would be TortoiseSVN. It's very easy to use.
Alternatively, if you don't mind your code being public on a free account (or paying a small monthly fee for private repositories), you can do all your commits via the browser using GitHub. This is hands down the simplest option for revision control newbies. Its major downside is limited flexibility for multi-file commits.
I have little to no meaningful experience with BitBucket and some other common alternatives to GitHub.
I'm sorry, but we cannot help you... You're not the first with this problem and unfortunately not the last.
To help prevent this kind of problem, software like Git exists. It lets you do the following (image reference):
The root file is the master. This is your working script. If you want to change/add/remove something, you make a so-called branch where you do whatever you want. If you have tested it, you can commit the changes (that is, store this as a new version of the script with a comment) and merge it with the master (that is, add the changes to the master script). If something seems to be wrong, you can always go back to a previous version of your script.
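A minimal sketch of that workflow on the command line (the script name is just an example):

git init                          # start tracking the project
git add simulation.py             # hypothetical script name
git commit -m "working baseline"  # store a version you can always return to

git checkout -b expand-area       # branch for the risky change
# ...edit and test...
git commit -am "expand area and number of nodes"

git checkout master               # the known-good version is untouched here
git merge expand-area             # fold the change in once it works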
The documentation of Git is very good: read it and use it wisely!

How to implement different data for cucumber scenarios based on environment

I have an issue with executing cucumber-jvm scenarios in different environments. The data incorporated in the feature files for the scenarios belongs to one environment. To execute the scenarios in different environments, I need to update the data in the feature files to match the environment being executed against.
For example, in the following scenario, I have the search criteria included in the feature file. The search criteria are valid for, let's say, the QA environment.
Scenario: search user with valid criteria
    Given user navigated to login page
    And clicked search link
    When searched by providing search criteria
        | fname1 | lname1 | address1 | address2 | city1 | state1 | 58884 |
    Then verify the results displayed
It works fine in the QA environment. But to execute the same scenario in other environments (UAT, stage, ...), I need to modify the search criteria in the feature files to match the data in those environments.
I'm thinking about maintaining the data for the scenarios in properties files, one per environment, and reading it based on the execution environment.
If the data is in a properties file, the scenario will look like below. Instead of the search criteria, I will give a property name:
Scenario: search user with valid criteria
    Given user navigated to login page
    And clicked search link
    When searched by providing search criteria
        | validSearchCriteria |
    Then verify the results displayed
Is there any other way I could maintain the data for the scenarios for all environments and use it according to the environment the scenario is executed in? Please let me know.
Thanks
I understand the problem, but I don't quite understand the example, so allow me to provide my own example to illustrate how this can be solved.
Let's assume we test library management software, and that in our development environment our test data contain 3 books by Leo Tolstoy.
We can have test case like this:
Scenario: Search by Author
    When I search for "Leo Tolstoy" books
    Then I should get result "3"
Now let's assume we create our QA test environment and in that environment we have 5 books by Leo Tolstoy. The question is how do we modify our test case so it works in both environments?
One way is to use tags. For example:
@dev_env
Scenario: Search by Author
    When I search for "Leo Tolstoy" books
    Then I should get result "3"

@qa_env
Scenario: Search by Author
    When I search for "Leo Tolstoy" books
    Then I should get result "5"
The problem here is that we have lots of code duplication. We can solve that by using Scenario Outline, like this:
Scenario Outline: Search by Author
    When I search for "Leo Tolstoy"
    Then I should see "<number_of_books>" books

    @qa_env
    Examples:
        | number_of_books |
        | 5               |

    @dev_env
    Examples:
        | number_of_books |
        | 3               |
Now when you execute the tests, you should select the @dev_env tag in the dev environment and the @qa_env tag in the QA environment.
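For example, with the Cucumber command-line runner that filtering might look like the line below; with cucumber-jvm you would pass the same tag expression through your runner's options instead:

cucumber --tags @dev_env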
I'll be glad to hear some other ways to solve this problem.
You can do this in two ways:
1. Push the programming up, so that you pass in the search criteria by the way you run cucumber.
2. Push the programming down, so that your step definition uses the environment to decide where to get the valid search criteria from.
Both of these involve writing a more abstract feature that does not specify the details of the search criteria. So you should end up with a feature like:
Scenario: Search with valid criteria
    When I search with valid criteria
    Then I get valid results
I would implement this using the second method and write the step definitions as follows:
When "I search with valid criteria" do
search_with_valid_criteria
end
module SearchStepHelper
def search_with_valid_criteria
criteria = retrieve_criteria
search_with criteria
end
def retrieve_criteria
# get the environment you are working in
# use the environment to retrieve the search criteria
# return the criteria
end
end
World SearchStepHelper
Notice that all I have done is change the place where the work is done, from the feature to a helper method in a module.
This means that as you are doing your programming in a proper programming language (rather than in the features) you can do whatever you want to get the correct criteria.
This may have been answered elsewhere, but the team I currently work with tends to prefer pushing environment-specific preconditions down into the code behind the step definitions.
One way to do this is by setting the environment name as an environment variable in the process running the test runner class, for example $ENV set to 'Dev'. Then, in an @Before hook that runs before each scenario, it is possible to verify the environment in which the scenario is being executed and load any environment-specific data needed by the scenario:
@Before
public void before(Scenario scenario) throws Throwable {
    String scenarioName = scenario.getName();
    env = System.getenv("ENV");
    if (env == null) {
        env = "Dev";
    }
    envHelper.loadEnvironmentSpecificVariables();
}
Here we set a 'default' value of 'Dev' in case the test runner is run without the environment variable being set. The envHelper points to a test utility class with a method loadEnvironmentSpecificVariables() that could load data from a JSON, CSV, or XML file with data specific to the environment being tested against.
An advantage of this approach is that it can help to de-clutter Feature files from potentially distracting environmental meta-data which can impact the readability of the feature outside of the development and testing domains.

Platform-independent way of locating personal R library/libraries

Actual question
How can I query the default location of a personal package library/libraries, as described in R Installation and Administration, even after environment variables like R_LIBS_USER or .libPaths() etc. might already have been changed by the user?
I'd just like to understand how exactly R determines the default settings in a platform-independent way.
Naively, I was hoping for something equivalent to R.home("library"), e.g. R.user("library").
Due diligence
I checked this post, and the answers sort of contain the information/paths I'd like to retrieve. Unfortunately, I only really know my way around on Windows, not on OS X or Linux, so I'm not sure if/how much of this is correct in a generic sense (home directory, separation of user vs. system-wide stuff, etc.):
OS X
/Library/Frameworks/R.framework/Resources/library
Linux
/usr/local/lib/R/site-library
/usr/lib/R/site-library
/usr/lib/R/library
I also looked into the manual, but that only gave me a basic idea of how R handles these sorts of things (maybe I just looked in the wrong corner; any pointers greatly appreciated).
Background
I sometimes create a temporary, fresh package library to have a "sandbox" for systematic testing (e.g. when planning to upgrade certain package dependencies).
When I'm done, I'd like to delete that library again while making absolutely sure that I don't accidentally delete one of the standard libraries (personal library/libraries and system-wide library).
I'm starting to put together a little package called libr for these purposes. Function deleteLibrary contains my current approach (lines 76 ff.):
## Personal libs //
r_vsn <- paste(R.version$major, gsub("\\..*", "", R.version$minor), sep = ".")
if (.Platform$pkgType == "win.binary") {
    lib_p <- file.path(Sys.getenv("HOME"), "R/library", r_vsn)
} else if (.Platform$pkgType == "mac.binary") {
    lib_p <- file.path(Sys.getenv("HOME"), "lib/R", r_vsn)
    ## Taken from https://stackoverflow.com/questions/2615128/where-does-r-store-packages
    ## --> Hopefully results in something like: '/Users/{username}/lib/R/{version}'
} else if (.Platform$pkgType == "source" && .Platform$OS.type == "unix") {
    lib_p <- file.path(Sys.getenv("HOME"),
                       c(
                           "local/lib/R/site-library",
                           "lib/R/site-library",
                           "lib/R/library"
                       ), r_vsn)
    ## Taken from https://stackoverflow.com/questions/2615128/where-does-r-store-packages
    ## --> Hopefully results in something like:
    ## '/usr/local/lib/R/site-library/{version}'
    ## '/usr/lib/R/site-library/{version}'
    ## '/usr/lib/R/library/{version}'
} else {
    stop("Don't know what to do for this OS type")
}
