How to invoke whatever generator rule is configured for a concrete instance of an abstract concept? (MPS)

I have a collection of nodes of concept Command that I'm iterating over with a $LOOP$ macro. Command is an abstract concept. I have defined templates and reduction rules for concrete subconcepts, such as Outline:
template tpl_Outline
input Outline
...
and
reduction rules:
[concept Outline ] --> tpl_Outline
[inheritors false ]
[condition <always>]
Question: How would I invoke the appropriate generator rule for the concrete concept from inside the $LOOP$ macro where the nodes are only known to be of the abstract type Command?
[EDIT] Since the proposed answer is specific to looping over a collection of elements, how would I do the same when there's no looping? That is, how to trigger the configured rule for a given node (e.g. a certain child of the current node).
Note 1: I tried using just $LOOP$[null], hoping for the element nodes to be processed by appropriate rules automatically, but that just produced nulls in the output.
Note 2: I tried $LOOP$[$COPY_SRC$[null]], but that produced
textgen error: 'No textgen for Draw.structure.Outline' in [actualArgument] Outline null[847086916111387210] in Draw.sandbox#0
[EDIT 2] This is actually a working solution. What helped was probably invalidating the caches (just Rebuild Project was not working).
Note 3: Previously I used a template switch to call an appropriate template based on the concrete concept, but I now want to support custom extensions of Command, so I can no longer create an exhaustive template switch.

Try using $COPY_SRCL$ (the L stands for Loop); this macro is designed exactly for your situation.
Also, template switches are extensible.

Regarding your Build --> Rebuild Project problem: sometimes File --> Invalidate Caches can help to resolve such problems.

Related

Declarative Pipeline using env var as choice parameter value

Disclaimer: I can achieve the behavior I'm looking for with the Active Choices plugin, BUT I really want this to work in a Jenkinsfile and be controlled with SCM, because it's tedious to configure Active Choices on each job we may need them on. And with it being separate from the Jenkinsfile creation, it's then one job defined in multiple places. :(
I am looking to verify whether this is even possible, because I can't get the syntax right (if it is possible), and I haven't been able to find any examples online:
pipeline {
    environment {
        ARTIFACTS = lib.myfunc() // this works well
    }
    parameters {
        choice(name: "Artifacts", choices: ARTIFACTS) // I can't get this to work
    }
}
I cannot use the function inline in the declaration of the parameter. The errors were clear about that, but it seems as though I should be able to do what I’ve written out above.
I am not home, so I do not have the exceptions handy, but I will add them soon. They did not seem very helpful while I was working on this yesterday.
What have I tried?
I've tried having the function return a List, because the docs say it requires a list, and I've also tried (illogically) returning a String in the precise syntax of a list of strings. (It was hacky, like return "['" + artifacts.join("', '") + "']" to look like ['artifact1.zip', 'artifact2.zip'].)
I also tried things like "$ARTIFACTS" and ${ARTIFACTS} in desperation.
The list of choices has to be supplied as a String containing newline characters (\n): choices: 'TESTING\nSTAGING\nPRODUCTION'
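For example, a minimal sketch (the artifact names here are placeholders; newer Jenkins versions also accept a List for choices, but the newline-separated String is the safe form given the bug linked below):

pipeline {
    agent any
    parameters {
        // choices is a single String; options are separated by \n
        // (built dynamically, that would be e.g. artifacts.join('\n')):
        choice(name: 'Artifacts', choices: 'artifact1.zip\nartifact2.zip')
    }
    stages {
        stage('Build') {
            steps {
                echo "Selected: ${params.Artifacts}"
            }
        }
    }
}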
I was tipped off by this article:
https://st-g.de/2016/12/parametrized-jenkins-pipelines
Related to a bug:
https://issues.jenkins.io/plugins/servlet/mobile#issue/JENKINS-40358
:shrug:
First, we need to understand how Jenkins works: it presents you with the Parameters page before running your pipeline code. Once you've set up the parameters and pressed Build, a node is allocated, variables are set, and your code starts to run.
But in your pipeline, as presented above, you want to run some code to prepare the parameters.
This is not how Jenkins usually works. It's definitely not doing the following: allocating a node, setting the variables, running some of your code until the parameters clause is reached, stopping all that, presenting you with the GUI, and then continuing where it left off. Again, that's not how Jenkins works.
This is why, when writing a new pipeline, your first option to build it is Build and not Build with Parameters. Jenkins hasn't run your code yet; it doesn't have any idea whether there are any parameters. When running for the first time, it will remember the parameters (and any choices, if there were any) as configured for this (first) run, so in the second run you will see the parameters as configured in the first run. (Generally, in run number n you will see the result of the configuration from run number n-1.)
There are a number of ways to overcome this.
If having a "somewhat recent" (rather than "current and absolutely up-to-date") list fits your needs, your code may need only minor changes: it will simply work from the second run onward. (I don't know what exactly lib.myfunc() returns, but if it's a choice of Development/Staging/Production this might be good enough.)
If having a "somewhat recent" list is an absolute no-no (e.g. your lib.myfunc() returns the list of git branches, and "the list of branches as of yesterday" is unacceptable), then your only solution is Active Choices. Active Choices allows you to run some code before you are shown the Build with Parameters GUI (with script approval etc.).

Terraform conditional source in MODULE

I am trying to set a module's source (this IS NOT a resource) based on a conditional, but it looks like the module source is evaluated before the logic is applied:
module "my_module" {
source = "${var.my_field == "" ? var.standard_repo : var.custom_repo}"
stuff...
more stuff...
}
I have created the standard_repo and custom_repo vars as well, defined with the URLs of the respective repos (using the git:: prefix; this all works without the conditional).
All this being said, does anyone know of a way to implement this conditional aspect? (Again, this is a module and not a resource.)
I tried using duplicate modules and choosing between them based on the var value, but this, too, does not work (the condition is never satisfied even when it should be):
repo = ["${var.my_field == "na" ? module.my_module_old : module.my_module_new}"]
One way to achieve this is described in this post.
Basically, a common pattern is to have several folders for different environments such as dev/tst/prd. These environments often reuse large parts of the codebase. Some parts may be abstracted as modules, but there is still often a large common file which is either copy-pasted or symlinked.
The post doesn't offer a way to conditionally disable a module based on variables, but it probably solves your underlying issue of enabling a module per environment. It makes use of Terraform's override feature and adds an infra_override.tf file. There, it defines a different source for the module, pointing to an empty directory. Voila, a disabled module.
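A minimal sketch of that layout (the file, module, and path names here are illustrative, not taken from the post):

# infra.tf -- the shared, default wiring
module "my_module" {
  source = "git::https://example.com/standard_repo.git"
}

# infra_override.tf -- present only in environments that disable the
# module; Terraform merges *_override.tf files on top of the base
# configuration, so this source wins and points at an empty directory:
module "my_module" {
  source = "./empty"
}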
Variables are not allowed in the module source parameter, and there does not seem to be a plan for this to change: https://github.com/hashicorp/terraform/issues/1439. Creating a wrapper script, or using something like mustache (http://mustache.github.io/), seems to be the best way to solve the problem.

Convention for passing arguments to non-Silicon subblocks/helpers

Sorry if the title is a bit confusing, but what are the options/conventions that Origen provides for setting up subblocks that aren't necessarily silicon models, or are just general helpers?
For example, I have a scan helper plugin that guides the user through creating a scan test program. I'd like to add a list of options/customizations to the top-level app. There are a few ways to do this:
I can add a list of attr_readers/methods. I think this looks a bit ugly though and adds a bunch of stuff to the toplevel that isn't used by anything else, and it blows up $dut.methods.
I could use parameters as defined here: http://origen-sdk.org/origen/guides/models/parameters/ and just call them from the scan tester app. But looking at the guides, I don't think that is the intended use case; it looks more like context switching, though maybe that was just the example use case.
I could add a scan_tester.setup method or something on the toplevel. This just seems unnecessary, though, since it's basically doing the same thing as #2 but requires a 'setup' method to be called. Yes, it's only one line, but if you mess up or forget to add that line then you've got some debugging to do that #2 avoids (I can print a warning, for example, if the scan parameters aren't provided, to help catch typos, etc.).
I can set it up as a subblock (currently how I've got it), but this doesn't really fit. Scan isn't a silicon model, so base address is useless, but required. It has no registers, etc.
Then there are other 'Ruby' things I could do (set up via on_create, use a global variable, etc.), but these all seem worse than the options above for one reason or another (mainly, more setup required on my part than using any of the existing options).
Any one of these would work, but from a convention standpoint, which direction should my scan tester setup go? Is there another option I haven't considered? I'd lean towards option #2 as it looks the cleanest.
Thanks
This is a really good question.
There are actually two other options:
Add application config parameters from the plugin: http://origen-sdk.org/origen/release_notes/#v0_7_24
Define a constant as used by the JTAG and other early plugins: http://origen-sdk.org/jtag/#How_To_Use
I think #2 is using parameters in a way that was not originally intended; maybe it could work, but I just can't picture it.
I don't really like #5 or #6 since they provide application-level and class-level configuration, which is sometimes what you want, but often these days I see the need more for (DUT) instance-level configuration.
So, my best answer here is that I don't know, but you are touching on a good point that we need to have an official API or at least a recommendation for this.
I think you should be open to the possibility of adding something new to Origen for this if you can think of something better.
As I'm writing this, I suppose #5 would also support instance-level configuration, albeit a bit long-winded:
def initialize(options = {})
  # Set an application config parameter from the DUT instance:
  Origen.app.config.scan_chain_length = 6
end
My comment wouldn't keep its format, so here it is again, looking better:
@Ginty
What would you think of a 'component' API? For example, we could have:
# components.rb
component(:scan, TIPScan::ScanTester,
  # options
  wgl_dir: ...,  # defaults to Origen.app.root/pattern/wgl
  custom_sort: proc { |wgl_name| ... },
)
# then we can do things like:
$dut.scan #=> TIPScan instance
$dut.component(:scan) #=> same as above
$dut.components #=> [TIPScan instance, ...]
$dut.has_component(:scan) #=> true etc.
Pretty much just a stripped down subblock class to handle these. I think our IAR/C compilers and even CATI could benefit from this and make the setup cleaner and more customizable.

How to run one feature file as initialization (i.e. before all other feature files) in cucumber-jvm?

I have a cucumber feature file 'A' that serves to set up the environment (data cleanup and initialization). I want to have it executed before all other feature files run.
It's kind of like the @Before hook as in http://zsoltfabok.com/blog/2012/09/cucumber-jvm-hooks/. However, that does not work because my feature file 'A' contains hundreds of cucumber steps and it is not as simple as:
@Before
public void beforeScenario() {
    tomcat.start();
    tomcat.deploy("munger");
    browser = new FirefoxDriver();
}
Instead, it's better to be able to run 'A' as a whole feature file.
I've searched around but did not find an answer. I am surprised that no one has had this type of requirement before.
The closest thing I found is 'Background'. But that would mean I could have only one huge feature file, with the content of 'A' as the Background at the top and the rest of my tests in the same file. I really do not want to do that.
Any suggestions?
By default, Cucumber features are run in a single thread, ordered:
Alphabetically by feature file directory
Alphabetically by feature file name within a directory
Scenario execution is then by order within the feature file.
So have your initialization feature in the first directory (alphabetically), with a file name that sorts first (alphabetically) in that directory.
That being said, it is generally bad practice to require an execution order for your feature files. We run our feature files in parallel, so order is meaningless. For Jenkins or TeamCity you could add a build step that executes the one feature file, followed by a second build step that executes the rest of your feature files.
I also have a project where we have a single feature file containing a very long scenario called Scenario: Test data, made up of a lot of very long steps like this:
Given the system knows about the following employees
  | uuid | user-key   | name | nickname |
  | 1    | 0101140000 | Anna | annie    |
  ... hundreds of lines like this follow ...
We see these long "system knows" scenarios as quite valuable, so that our testers, Product Owner and developers have a baseline of what data are in the system. Our domain is quite complex, and we need this baseline of reference data for everyone to be able to understand the tests.
(These reference data become almost like well-known personas, and are a shared team metaphor.)
In the beginning, we were relying on the alphabetic naming convention to have AAA.feature run first.
Later, we discovered that this setup was brittle, and decided to use the following trick, inspired by the PageObject pattern:
Add a background with the single line Given(~'^I set test data for all feature files$')
In the step definition, have a factory create the test data, and make sure inside the factory method that it is only created once, e.g. testFactory.createTestData() (see the sketch below)
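A minimal sketch of that once-only factory in cucumber-jvm terms (the class, step, and method names here are hypothetical):

// TestDataSteps.java
import cucumber.api.java.en.Given;

public class TestDataSteps {
    @Given("^I set test data for all feature files$")
    public void iSetTestDataForAllFeatureFiles() {
        TestDataFactory.createTestData();
    }
}

// TestDataFactory.java
public class TestDataFactory {
    private static boolean created = false;

    // Create the reference data only once per test run, no matter
    // how many scenarios include the background line.
    public static synchronized void createTestData() {
        if (created) {
            return;
        }
        // ... load the baseline employees etc. here ...
        created = true;
    }
}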
In this way, you have both the convenience of expressing the reference setup as a scenario, which enhances team communication, and a stable test setup.
Hope this is helpful!
Agata

Guard and Cucumber: when I edit a step definition I'd like to only run features that implement this step

I have read the topic Guardfile for running single cucumber feature in subdirectory?, and this works great: when I change a feature, only this feature is run by Guard.
But in the other direction it doesn't work: when I edit any step definition file, all features are always run, whether they use any of the steps in that file or not.
This is not nice. I'd like at least to have only those features run which use any of the steps in the edited file; even better would be if Guard could see which step is currently being edited and then only run the features that use this specific step.
The first shouldn't be that hard to accomplish, I guess; the second rather seems wishful thinking...
To master Guard and have the perfect setup for your projects and your own needs, you have to change the Guardfile and configure your watchers accordingly. The templates that come with each Guard plugin try to match the most useful behavior for most users, which might differ from your personal preferences.
Each Guard plugin starts with the guard DSL method, followed by an options hash to configure the Guard plugin. The options are often different for different Guard plugins and you have to consult the plugin README for more information.
Inside the guard block (do ... end) you normally configure your watchers. A watcher must be defined with a RegExp, which describes the files to be watched. I use Rubular to test my watchers, and you can paste your current features, copied from the output of find features, to have real files to test your RegExp against.
The line
watch(%r{features/.+\.feature})
for example watches all files in the features folder that end with .feature. Since no block is provided to the watcher, the matched file is passed unmodified to Guard::Cucumber for running.
The watcher
watch(%r{features/support/.+}) { 'features' }
matches all files in the features/support directory, and because the block always returns features, every time a file within the support directory changes, features is passed to Guard::Cucumber and thus all features are executed.
The last line
watch(%r{features/step_definitions/(.+)_steps\.rb}) do |m|
  Dir[File.join("**/#{m[1]}.feature")][0] || 'features'
end
watches every file that ends with _steps.rb in the features/step_definitions directory and tries to match a feature to the step definition. Please note the parentheses in the RegExp features/step_definitions/(.+)_steps\.rb. They define a match group that is available later in your watcher block. For example, the step definition features/step_definitions/user_steps.rb will match, and the first match group (m[1]) will contain the value user.
Now we try to find a matching file named user.feature in all subdirectories (**). If one exists, the first matching file ([0]) is run; if nothing is found, all features are run.
So it looks like you've named your steps differently from what the default Guard::Cucumber Guardfile is expecting, which is totally fine. Just change the watcher to match your naming convention.
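For example, a sketch assuming your step files are named like features/step_definitions/user.rb, without the _steps suffix (adjust the RegExp to whatever your convention actually is):

# Guardfile
guard 'cucumber' do
  watch(%r{features/.+\.feature})
  watch(%r{features/support/.+}) { 'features' }

  # Map features/step_definitions/user.rb -> **/user.feature,
  # falling back to all features when no match is found:
  watch(%r{features/step_definitions/(.+)\.rb}) do |m|
    Dir[File.join("**/#{m[1]}.feature")][0] || 'features'
  end
end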
