MPS: run a model-to-model transformation and TextGen from an action

I am developing a schedulability analysis plugin for mbeddr. In order to run an external tool, I want to transform the mbeddr model into the external tool's model and use the TextGen aspect to create an input file for that tool. The analysis will be started from an action defined in a plugin solution (is this the smartest way?), so how do I trigger the M2M transformation and the TextGen from a plugin action?
Thanks

You can have a look at the MakeActionImpl class from MPS. It essentially creates a MakeSession, passes it a list of models to generate, and lets you await the Future that is returned. If you need fine-grained control over the generators invoked and the facets involved, have a look at MakeUtils from com.mbeddr.core.runconfiguration.pluginSolution. It is a good blueprint if you want deeper integration with the MPS internals, and it also contains an easy API for the simple rebuild/make tasks.
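To give a rough idea of the shape of that code, here is a sketch only: the class names, constructor arguments, and the way the make service is obtained are recalled from MPS/mbeddr sources and differ between MPS versions, so treat every identifier as an assumption and verify it against MakeActionImpl/MakeUtils in your installation.

    // Sketch of triggering a make (generation + TextGen) from plugin code.
    // All identifiers below are assumptions to be checked against your MPS version.
    MakeSession session = new MakeSession(operationContext, messageHandler, /* cleanMake = */ true);
    IMakeService makeService = operationContext.getProject()
            .getComponent(MakeServiceComponent.class).get();
    if (makeService.openNewSession(session)) {
        // Turn the models you want to generate into make resources
        Iterable<IResource> resources =
                new ModelsToResources(operationContext, modelsToGenerate).resources(false);
        Future<IResult> result = makeService.make(session, resources);
        result.get();   // wait until generation, including TextGen, has finished
    }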


How to encode a taxonomy in Weaviate contextionary

I would like to create a semantic context for my data before vectorizing the actual data in Weaviate (https://github.com/semi-technologies/weaviate).
Let's say we have a taxonomy with a set of domain-specific concepts together with links to their related concepts. Could you advise me on the best way to encode not only those concepts but also the relations between them using the contextionary?
Depending on your use case, there are a few possible answers:
1) You want to create the "semantic context" in a Weaviate schema and use a vectorization module to vectorize the data according to that schema.
2) You have domain-specific concepts in your data that the out-of-the-box vectorization modules don't know about (e.g., specific abbreviations).
3) You want to capture the semantic context of (i.e., vectorize) the graph itself before adding it to Weaviate.
The first is the easiest and most straightforward; the last is the most esoteric.
Create a schema and use a vectorizer for your data
In your case, you would create a schema based on your taxonomy and load the data using an out-of-the-box vectorizer (this configurator helps you to build a Docker-compose file).
I would recommend starting with this anyway, because it will determine your data model and how you can search through and/or classify data. It might even be the case that for your use case this step already solves the problem because the out-of-the-box vectorizers are (bias alert) pretty decent.
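As an illustration (a sketch, not the only way to do it), the taxonomy could be expressed as a Weaviate class whose properties include a cross-reference back to the same class, so that "related concept" links become references between stored objects. The example below posts such a class definition to Weaviate's REST schema endpoint using Java's built-in HTTP client (Java 15+ for the text block); the "Concept" class, its property names, and the local endpoint URL are all made up for illustration.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class CreateConceptClass {
        public static void main(String[] args) throws Exception {
            // Hypothetical class "Concept" with a name, a description, and a
            // cross-reference "relatedTo" pointing at other Concept objects.
            String conceptClass = """
                {
                  "class": "Concept",
                  "vectorizer": "text2vec-contextionary",
                  "properties": [
                    {"name": "name", "dataType": ["text"]},
                    {"name": "description", "dataType": ["text"]},
                    {"name": "relatedTo", "dataType": ["Concept"]}
                  ]
                }
                """;

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:8080/v1/schema"))   // local Weaviate assumed
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(conceptClass))
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode() + " " + response.body());
        }
    }

Individual taxonomy links then become cross-references between the stored concept objects.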
Domain-specific concepts
At the moment of writing, Weaviate has two vectorizers, the contextionary and the transformers modules.
If you want to extend Weaviate with custom context, you can extend the contextionary or fine-tune and distribute custom transformers.
If you do this, I would still highly recommend taking the first step as well, because it will simply improve the results.
Capture semantic context of your graph
I don't think this is what you want, but it is possible and quite esoteric. In principle, you can store your vectorized graph in Weaviate, but you need to generate the vectors on your own. For example, at the moment of writing, we are looking at RDF2Vec.
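To make that concrete (again a sketch under assumptions: a local instance, a hypothetical "Concept" class configured with the "none" vectorizer so Weaviate does not overwrite your vectors, and made-up vector values), bringing your own vectors boils down to supplying a "vector" field when you create each object:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ImportPrecomputedVector {
        public static void main(String[] args) throws Exception {
            // Hypothetical object of a "Concept" class; the vector values are made up
            // and would normally come from your own embedding step (e.g. RDF2Vec).
            String object = """
                {
                  "class": "Concept",
                  "properties": {"name": "combustion engine"},
                  "vector": [0.12, -0.03, 0.57, 0.24]
                }
                """;

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:8080/v1/objects"))   // local Weaviate assumed
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(object))
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode() + " " + response.body());
        }
    }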
PS:
Because people often ask about the role of ontologies and taxonomies in Weaviate, I've written this blog post.

Access/Use default flow methods

We have a test interface that we use with all our Origen flows, and it has test flow methods defined. I believe there is possibly an issue with our test methods, and to debug it I'd like to use the default func method rather than the one defined in our interface.
Where would I find this method and what is the correct way to integrate it?
Thanks
func is not a flow method provided by Origen; rather, it is a method commonly used in examples to indicate the notion of "a functional test".
The underlying API provided by Origen is flow.test(test_object, options), and within most test program source code the flow will be inferred, so often you will see code that simply calls test.
The test object can be just a name, or it can be an object representing the test like a TestInstance or TestSuite object provided by the Teradyne and Advantest APIs respectively.
You can create a test program flow (and documentation for it) by using only the test method, you can see examples of that in the source code for this video on how to create test program flows: http://origen-sdk.org/origen/videos/5-create-program-flow/
A test program, however, comprises more than just a flow, and you would normally also want the other files that define what the test is to be generated in addition to the flow.
In time, we hope that the Origen community will produce libraries of off-the-shelf methods like func which would generate the complete test as well as insert it into the flow; currently, though, it is the application's responsibility to create such methods within its test program interface.
See this example source code for how to create a func method that can target multiple tester types - http://origen-sdk.org/origen/videos/6-create-program-tests/
To start with, you shouldn't worry about the multiple-tester aspect; just refer to this guide for the APIs that are available to generate the test program artefacts for creating tests on the V93K - http://origen-sdk.org/origen/guides/program/v93k/

custom action method calls in WiX

Let's imagine I've written a custom-actions managed class library that I am planning to use in a WiX setup project. The class library contains a few classes which have "Install" methods. I am planning to launch those methods from my setup package as custom actions, so I mark all of them with the CustomActionAttribute. What will happen then? Will only one method be launched, or all of them, or will the compilation of the setup project fail? Is this considered good practice at all?
A better practice would be to:
1) Eliminate CAs where possible (don't reinvent the wheel)
2) Make CAs generic and declarative (table-data driven)
3) Make CAs transactional whenever possible (support rollback)
4) Don't use InstallUtil, use WiX DTF instead
5) Understand custom action context / scheduling concerns
You should never really be installing things with custom actions, since that's what the whole MSI thing is for.
If you do want to do this though, make sure you schedule your actions in the InstallExecuteSequence table, otherwise they won't run. Additionally, make sure your DLL is included in the Binary table, and that your custom actions reference that binary.
I've simulated the problem and got the following error while trying to compile the custom actions class library:
An item with the same key has already been added.
So that means it is impossible to use methods with the same name in the class library, or at least we shouldn't do that.

MDD: How dynamic is MDD at runtime?

Over the years, I've investigated a lot of ways to use code generators and MDD. I've always felt that something is lacking: Patching and changes to the model at runtime.
Patching: If you have a code generator, all your classes should look the same. Now you have a single exception. All code generators so far would require that I modify the template or the template engine to make this work.
Wouldn't it be better if I could apply patches to the result of the code generation step to fix the exceptions?
Well, it depends on how you build your model. In fact, it depends on what code generator you are using, its approach, and what it lets you do.
Creating an exception to a rule (model) is more or less against the nature of MDD, unless the applied modeling approach allows you to add exceptions as modeling entities.
I think ABSE is the only modeling approach that accepts "custom code" as a first-class entity, just like a text or an integer. If you create a template that contains a "CustomCode" parameter, you can later add your exception code only when necessary, without breaking your model rules. This can be used to add or replace code. You just need to specify it in your template.
AtomWeaver is a free implementation of the ABSE modeling methodology.
MDD doesn't work because it is based on a view of the domain and not the entire domain. I mean that MDD usually takes as input an XMI file coming from a UML diagram. The problem is that this diagram is only one view of the domain, so you have many alternatives, and the real world is a lot more complex, especially at the deployment stage.
The only company which has provided me real value in my project was Omondo with EclipseUML. EclipseUML doesn't try to do MDD; instead it creates UML at the diagram level, live-synchronized with the code. Deployment is done using stereotypes which are added as Java annotations in the code. I can therefore model, and if I add deployment stereotypes my application can be deployed immediately. If I manually change my code, my model is refactored and all my views are updated. If I want to add documentation, I just add notes in the metamodel. These notes are available live when I click on each element. No more printed documentation is needed, thanks to live navigation, dynamic view creation, etc.
My EclipseUML model is always up to date, and I can deploy it immediately because the Java annotations are live-synchronized between the model, the metamodel, the diagrams, and the code. Really cool :-)

Formatting an XSD schema for peer review

I designed a data model which is represented by an XSD schema.
The data model also provides the types that are being used as web service parameters in a WSDL descriptor.
I would like to send the XSD schema around and ask the people involved to peer-review the data model.
What tool or presentation method would you suggest as a basis for the peer review? The data model should be readable for non-experts, at least when it comes to the semantic meaning of the parameters.
Edit:
To be more specific: of course, syntactically, the schema validates. Actually, I'm already working on code based on JAXB-generated classes. My goals are to freeze the data model (and thus the input parameters) and to make sure nothing got lost or forgotten from a semantic (i.e., business-relevant) point of view.
Edit 2
I've been thinking about how it would probably be best to spread a data model around. I'm thinking of something like Javadoc for XSD schemas. Does anyone know if something like that exists? Basically it would be done with a set of XSLTs, right?
I know the following tools that generate documentation from XML Schema files (XSD):
- xs3p: an XSLT stylesheet that generates a single XHTML page from an XSD
- xsddoc: free (LGPL), mainly XSLT based, JavaDoc-like output (see the xsddoc examples)
- xnsdoc: an improved commercial version of xsddoc, free for personal/educational use, JavaDoc-like output
- XSDdoc 2.0: commercial, JavaDoc-like output
For a small XML schema, I would probably suggest using the xs3p XSLT stylesheet. For a more complex schema, I suggest using xsddoc.
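Since xs3p is just an XSLT stylesheet, applying it can be as simple as the sketch below, which uses the standard JAXP transformation API. The file names are placeholders, and it assumes your XSLT processor copes with the stylesheet (xs3p targets XSLT 1.0; some builds may rely on processor-specific extensions, so check against your setup).

    import java.io.File;
    import javax.xml.transform.Transformer;
    import javax.xml.transform.TransformerFactory;
    import javax.xml.transform.stream.StreamResult;
    import javax.xml.transform.stream.StreamSource;

    public class SchemaDocGenerator {
        public static void main(String[] args) throws Exception {
            TransformerFactory factory = TransformerFactory.newInstance();
            // xs3p.xsl is the stylesheet shipped with the xs3p project
            Transformer transformer = factory.newTransformer(new StreamSource(new File("xs3p.xsl")));
            // An XSD is itself an XML document, so transform it directly into XHTML
            transformer.transform(new StreamSource(new File("datamodel.xsd")),
                                  new StreamResult(new File("datamodel-doc.html")));
        }
    }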
I recommend using the XSD for something. Specifically, show some actual applications, with examples as real code.
Actual applications are what make a schema interesting. The examples don't have to be big, sophisticated or completely realistic. They just have to compile. Other people will want to copy and paste the code samples.
These examples are the "hello world" of the schema. And they act as a kind of unit test for the schema.
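For instance (a minimal sketch; the schema and instance file names are placeholders), such a "unit test" can simply validate each example document against the schema with the standard javax.xml.validation API:

    import java.io.File;
    import javax.xml.XMLConstants;
    import javax.xml.transform.stream.StreamSource;
    import javax.xml.validation.Schema;
    import javax.xml.validation.SchemaFactory;
    import javax.xml.validation.Validator;

    public class SchemaSmokeTest {
        public static void main(String[] args) throws Exception {
            SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
            Schema schema = factory.newSchema(new File("datamodel.xsd"));        // placeholder file name
            Validator validator = schema.newValidator();
            // Each "hello world" example document doubles as a regression test for the schema
            validator.validate(new StreamSource(new File("example-request.xml")));
            System.out.println("example-request.xml is valid against datamodel.xsd");
        }
    }

If you keep a small set of such example documents next to the schema, reviewers can read the examples instead of wading through the XSD itself.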
The closest thing to Javadoc for an XML schema that I've seen is running the Javadoc tool on source generated from the schema. This requires two things: 1) that your schema has internal annotation elements documenting it, and 2) that your source generator turns those annotations into Javadoc comments.
The very useful Oxygen XML Developer also supports generating documentation; see
http://www.oxygenxml.com/xml_schema_documentation.html
(commercial, but there's a fully functional 30 day trial available)
I'll try it out now; I need a simple way to generate a document with all types and the available xsd:documentation descriptions as a simple interface description...
Disclosure: I work for Innovasys, the producer of the documentation tool mentioned below.
You could take a look at Innovasys Document! X. As well as automatically generating a structured and linked page for every element, simple type, complex type, group, and attribute group, it will also generate linked XSD diagrams (including sequences/choices etc.) and structure tables that include the annotations from your XSDs and make sense of the relationships between the elements in your schemas. The output is template based, so you can adapt it to your preferred style and structure. It will build output as web-ready HTML or compiled help files.
Uniquely, it also includes a WYSIWYG editor that allows you to author additional content to supplement the automatically generated material and the annotations from the XSD source, so you can provide additional contextual information for your peer review. There is also a Community Extensions feature that allows people viewing the generated output to record comments and feedback, which can be viewed and actioned directly from within Document! X.
