How to add a specific generation phase that does Model-to-MS Word in MPS?

Assume that I have developed a set of behaviours in MPS that convert an instance of a WordDocument concept (and its children), which describes a word-processor document, into an MS Word document using POI. I have also been able to implement an action in an MPS plugin that generates my desired MS Word document via a right click on my root node.
I would like to add this as a phase in the generation process, so that after the Model-To-Model phases, the generation process of MPS does Model-to-MS Word generation instead of Model-to-Text.
Is MPS customizable that way, and what would be the set of concepts to use?

When you run make or rebuild on models in MPS, MPS starts a so-called MakeSession. In this session MPS executes multiple steps. One step in the make session is, for instance, "generate", which runs the model-to-model transformation; a second one is "textgen", which then writes the resulting model of the generate step to disk by executing the textgen definitions of the languages.
These individual steps are called "facets". You can contribute your own facets to the overall make process. To do so, you need to create a plugin aspect in your language and then create a facet in there. In the facet you can declare its dependencies and priorities. In your case you want to run after generation but before textgen, so that you can access the result of the generation.
The facets can state their input data in a declarative way. In your case you need the GResource, which represents the output of the generator facet. You can then access the model(s) on it and run your POI code on them.
A minimal example would look like this:
facet RunPoi extends <none> {
  Required: Generate, TextGen
  <no optional facets>
  Targets:
    target genWord overrides <none> weight default {
      resources policy: transform GResource -> <no output>
      Dependencies:
        after generate
        before textGen
        before textGenToMemory
      <no properties>
      <no queries>
      <no config>
      (progressMonitor, input)->void {
        foreach resource in input {
          SModel mdl = resource.model;
          // run poi code with mdl
        }
      }
    }
}
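To make the "// run poi code with mdl" step concrete, here is a minimal sketch using Apache POI's XWPF API. The WordExporter class, its export signature, and the idea of mapping each model node to one paragraph of text are assumptions for illustration; only the POI calls themselves (XWPFDocument, createParagraph, createRun, setText, write) are real API.

```java
import java.io.FileOutputStream;
import java.io.IOException;
import org.apache.poi.xwpf.usermodel.XWPFDocument;
import org.apache.poi.xwpf.usermodel.XWPFParagraph;
import org.apache.poi.xwpf.usermodel.XWPFRun;

// Hypothetical exporter: turns a sequence of paragraph strings (assumed to be
// extracted from the generated SModel) into a .docx file via POI.
public class WordExporter {
    public static void export(Iterable<String> paragraphs, String path) throws IOException {
        try (XWPFDocument doc = new XWPFDocument();
             FileOutputStream out = new FileOutputStream(path)) {
            for (String text : paragraphs) {
                // One Word paragraph per model-derived string.
                XWPFParagraph p = doc.createParagraph();
                XWPFRun run = p.createRun();
                run.setText(text);
            }
            doc.write(out);
        }
    }
}
```

Inside the facet's lambda you would walk `mdl`'s root nodes, build the paragraph strings from them, and call a helper like this.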

Related

Custom library not recognized inside DSL file in groovyscript part

Context: I'm implementing a Jenkins pipeline. To define the pipeline's parameters, I implemented a DSL file.
In the DSL file, I have a parameter of type ActiveChoiceParam, called ORDERS. This parameter will allow me to choose one or more order numbers at the same time.
Problem: What I want to do is to set the values that get rendered for the ORDERS parameter from a custom library. Basically I have a directory my_libraries with a file, orders.groovy. In this file there is an order class, with a references list property that contains my values.
The code in the DSL file is as follows:
def order = new my_libraries.order()
pipelineJob("111_name_of_pipeline") {
    description("pipeline_description")
    keepDependencies(false)
    parameters {
        stringParam("BRANCH", "master", "The branch")
        activeChoiceParam("ORDERS") {
            description("orders references")
            choiceType('CHECKBOX')
            groovyScript {
                script("return order.references")
                fallbackScript("return ['error']")
            }
        }
    }
}
Also, it is good to mention that my custom library works well. For example, if I use a choiceParam as below, it works, but of course that is not the behaviour I want, since I need to select multiple choices.
choiceParam("ORDERS", order.references, "Orders references list but single-choice")
How can I make order.references available in the script part of groovyScript?
I've tried using a global instance of the order class and instantiating the class directly in the groovyScript, but with no positive result.

Run Initialization Code when a Script is Loaded

I have a GDScript file, and I would like to run a block of code whenever the script is loaded. I know _init runs whenever an instance is constructed and _ready runs when it's added to the scene tree. I want to run code before either of those events: when the preload or load that first brings the script into memory happens.
In Java, this would be a static initializer block, like so
public class Example {
    public static ArrayList<Integer> example = new ArrayList<Integer>();
    static {
        // Pretend this is a complicated algorithm to compute the example array ...
        example.add(1);
        example.add(2);
        example.add(3);
    }
}
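To make the load-time ordering concrete, here is a runnable version of that pattern (the class name LoadDemo and the print statements are mine): the static block runs exactly once, when the JVM first loads the class, before any constructor runs.

```java
import java.util.ArrayList;
import java.util.List;

// Demonstrates that a static initializer runs once, at class-load time,
// before any instance is constructed.
public class LoadDemo {
    public static final List<Integer> EXAMPLE = new ArrayList<>();

    static {
        // Pretend this is a complicated algorithm to compute the example list.
        EXAMPLE.add(1);
        EXAMPLE.add(2);
        EXAMPLE.add(3);
    }

    public LoadDemo() {
        // By the time any constructor runs, EXAMPLE is already populated.
        System.out.println("constructing, EXAMPLE = " + EXAMPLE);
    }

    public static void main(String[] args) {
        // Touching the class triggers loading, so EXAMPLE is already full here.
        System.out.println("before first instance: " + EXAMPLE);
        new LoadDemo();
    }
}
```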
I want to do the same thing with a GDScript file in Godot, where some top-level constant or other configuration data is calculated when the script is first loaded, something like (I realize this is not valid GDScript)
extends Node
class_name Example

const EXAMPLE = []

static:
    EXAMPLE.push_back(1)
    EXAMPLE.push_back(2)
    EXAMPLE.push_back(3)
Is there some way to get a feature like this or even to approximate it?
Background Information: The reason I care (aside from a general academic interest in problems like these) is that I'm working on a scripting language that uses GDScript as a compile target, and in my language there are certain initialization and registration procedures that need to happen to keep my language's runtime happy, and this needs to happen whenever a new script from my language is loaded.
In reality the thing I want to do is basically going to be
static:
    MyLanguageRuntime.register_class_file(this_file, ...)
where MyLanguageRuntime is a singleton node that exists at the top of the scene tree to keep everything running smoothly.

Cucumber JVM: avoid dependency injection by PicoContainer for features not tagged for execution

Assuming that I have a Cucumber feature tagged @api
@api
Feature: BankID authentication

  Scenario Outline: Successful authentication of user using BankID
    Given the initial flow start URL 
    When I enter that "<url>" into my browser
    ...
and steps for execution as below:
public class ApiSteps implements En {
    public ApiSteps(ABCinjected abcInjected) {
        Given("^the initial flow start URL $", () -> {});
        When("^I enter that \"([^\"]*)\" into my browser$", abcInjected::navigateToAuthenticationPage);
        ...
    }
}
Even if I define this feature not to be executed, by specifying different Cucumber tags or by explicitly specifying tags = {"not @api"}, and although the steps themselves are not executed per se, PicoContainer still creates and injects an instance of the ABCinjected class, which is undesirable. Is it possible to control this behavior? I assume that if a feature is tagged as not to be executed and the related scenarios/steps are ignored, DI should consequently not happen.
I got a response from a Cucumber contributor on GitHub:
When using lambda stepdefs the class has to be instantiated to register
the steps. And we would need to know the steps defined by the class to
determine if we should instantiate it. This is a deadlock of
requirements.
Another recommendation is to use different packages for different kinds of steps (api, unit, etc.) and set different glue at runtime.
https://github.com/cucumber/cucumber-jvm/issues/1672
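The separate-glue recommendation can be sketched as a JUnit 4 runner configuration. The class name and package layout below are assumptions, not from the question; the point is that only step-definition classes under the listed glue packages are instantiated, so ApiSteps (living in a different package) is never created when running the non-API suite.

```java
import org.junit.runner.RunWith;
import io.cucumber.junit.Cucumber;
import io.cucumber.junit.CucumberOptions;

// Hypothetical runner for the non-API suite: glue points only at the
// non-API step packages, so PicoContainer never sees ApiSteps at all.
@RunWith(Cucumber.class)
@CucumberOptions(
        tags = "not @api",
        glue = {"com.example.steps.unit"}  // assumed package layout
)
public class NonApiTestRunner {
}
```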

What is the difference between Override Attribute Initializers and Set Run State in Enterprise Architect and why does it behave differently?

this is my first question on SO, so please exercise some kindness on my path to phrasing perfect questions.
On my current project I try to model deployments in EA v14.0 where I want components to be deployed on execution environments and additionally set them to some values.
However, depending on how I deploy (as a Deployment Artifact or as a Component Instance) I get different behaviours. On Deployment Artifacts I am offered Override Attribute Initializers; on a Component Instance I am offered Set Run State. When I try to set an attribute on the Deployment Artifact I get an error message that there is no initializer to override. When I try to set the run state on the Component Instance I can set a value, but then I get a UML validation error message that I must not link a component instance to an execution environment:
MVR050002 - error ( (Deployment)): Deployment is not legal for Instance: Component1 --> ExecutionEnvironment1
This is how I started. I created a component with a deployment specification:
I then created a deployment diagram to deploy my component: once as a Deployment Artifact and once as a Component Instance.
When I try to Override Attribute Initializers, I get the error message `DeploymentArtifact has no attribute initializers to override`.
When I try to Set Run State I can actually enter values.
However, when I then validate my package, I get the aforementioned error message.
Can anyone explain what I am doing wrong or how this is supposed to work?
Thank you very much for your help!
Actually there are multiple questions here.
Your 2nd diagram is invalid (and probably EA should have moaned here already since it does so in V12).
You can deploy an artifact on a node instance and use a deployment spec as an association class, as shown on p. 654 of the UML 2.5 spec:
But you cannot deploy something on something abstract. You will need instances, on both sides.
You can silence EA about warnings by turning off strict connector checking in the options:
To answer the question in the title: Override Attribute Initializers looks into the attribute list of the classifier of an object and offers those attributes, so you can set their run states (that is, the values of the attributes at runtime). Set Run State, moreover, allows you to set arbitrary key/value pairs which are not classifier attributes. This is to express e.g. RAM size in Nodes or things like that.

Output resources using Groovy ASTTransformer

I've written a number of Java annotation processors that write some arbitrary data to text files that will be included in my class directory / jar file. I typically use code that looks like this:
final OutputStream out = processingEnv
        .getFiler()
        .createResource(StandardLocation.CLASS_OUTPUT, "", "myFile")
        .openOutputStream();
I'm trying to do something similar in a groovy ASTTransformation. I've tried adding a new source file but that (expectedly) must be valid groovy. How do I write arbitrary resources from an ASTTransformation? Is it even possible?
As part of implementing your ASTTransformation, you need to implement the void visit(ASTNode[] nodes, SourceUnit source) method. In it you can call source.getConfiguration().getTargetDirectory(), which returns your build output directory, e.g. /Users/skissane/my-groovy-project/build/classes/groovy/main. You can then write your resources into there, and whatever is packaging them into the JAR (such as Gradle) should pull them from that directory.
In my case, I wanted to delay writing the resources until the OUTPUT phase, since I was creating META-INF/services files and wanted to wait until I'd seen all the annotated classes before writing them; otherwise I'd be repeatedly adding to them for each annotated class. So I also implemented CompilationUnitAware, and in my setCompilationUnit method I call unit.addPhaseOperation() and pass it a method reference to run during OUTPUT. Note that if you are using a local ASTTransformation, setCompilationUnit will be called multiple times (each time on a new instance of your transformation class); to avoid adding the phase operation repeatedly, I used a map in a static field to track whether I'd seen this CompilationUnit before. My addPhaseOperation method is called once per output class, so I used a boolean field to make sure I only wrote the resource files once.
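The resource-writing step itself needs nothing beyond the JDK. Here is a stdlib-only sketch, assuming targetDir is whatever source.getConfiguration().getTargetDirectory() returned; the ServiceFileWriter class and all package names are hypothetical.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Hypothetical helper: writes a META-INF/services file into the compiler's
// target directory, mirroring what the ASTTransformation would do during OUTPUT.
public class ServiceFileWriter {
    public static Path writeServiceFile(Path targetDir, String interfaceName,
                                        List<String> implementations) throws IOException {
        Path services = targetDir.resolve("META-INF").resolve("services");
        Files.createDirectories(services);
        Path file = services.resolve(interfaceName);
        // One implementation class name per line, as the ServiceLoader format requires.
        Files.write(file, implementations);
        return file;
    }

    public static void main(String[] args) throws IOException {
        // In a real transformation, targetDir would come from
        // source.getConfiguration().getTargetDirectory().
        Path targetDir = Files.createTempDirectory("groovy-classes");
        Path written = writeServiceFile(targetDir, "com.example.MyService",
                List.of("com.example.MyServiceImpl"));
        System.out.println(Files.readAllLines(written));  // prints [com.example.MyServiceImpl]
    }
}
```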
Doing this caused a warning to be printed:
> Task :compileGroovy
warning: Implicitly compiled files were not subject to annotation processing.
Use -implicit to specify a policy for implicit compilation.
1 warning
Adding this to build.gradle made the warning go away:
compileGroovy {
    options.compilerArgs += ['-implicit:none']
}
