apps and plugin revisions, making an Origen app production ready - origen-sdk

Our development app, which has ~12 homegrown plugins and many Origen gems, needs to move from development to production-level quality. I see many miscellaneous topics starting here, but I don't see much on app/plugin versioning, Origen modes (debug and production), and items such as reference files and examples. I know there is the 'origen specs' command which runs rspec, but what about 'origen test' and other related items?
thx

Yes, documentation on this topic is a bit thin on the ground; here is some info to remedy that:
Runtime mode
Yes Origen does have the concept of debug and production modes - http://origen-sdk.org/origen/guides/runtime/mode/
Running in production mode does give you some rudimentary protection against local edits finding their way into production builds, but in reality I think there are many projects in production today which just use debug mode all the time.
I think the main benefit of production mode is that you can hook into it to implement domain specific checks that you care about. For example, in one of my apps I have a before_generate callback handler which checks that the product firmware we are using is the latest released version when we are building for production.
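For illustration, here is a minimal sketch of that kind of check as an application callback (the firmware lookup helpers are hypothetical, and it assumes Origen.mode.production? is available in your Origen version):
# config/application.rb -- sketch only; firmware_version and
# latest_released_firmware_version are hypothetical helpers, substitute
# whatever your app uses to look up firmware releases.
def before_generate(options = {})
  if Origen.mode.production?
    unless firmware_version == latest_released_firmware_version
      fail 'Production builds must reference the latest released firmware!'
    end
  end
end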
Another useful technique is to embed the mode in things like the program, pattern or test names generated by your application. That makes it very clear if someone has used a debug build for something that they want to release to production.
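As a rough sketch of that idea (the helper method and naming scheme are made up; call it from wherever your app builds output names):
# Hypothetical helper: prefix generated names with the current runtime mode so
# that debug output is obvious if it ever heads towards production.
def decorate_with_mode(name)
  prefix = Origen.mode.production? ? 'prod' : 'dbg'
  "#{prefix}_#{name}"
end

decorate_with_mode('nvm_prb1_pattern')   # => "dbg_nvm_prb1_pattern" in debug mode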
Versioning
Company internal plugins should be version tagged and released in the same way as Origen and the open source plugins are.
Normally everyone does that by running the origen rc tag command, which will tag the plugin, maintain a release history, and build and publish the gem to your gem server.
Plugins should generally be careful not to lock to specific versions of their dependencies in their .gemspec file, and should instead specify a minimum version. Something like '~>1', '>1.2.3' is good, which means any version 1 of the dependency that is greater than 1.2.3. See here for more on how to specify gem versions: http://guides.rubygems.org/patterns/#declaring-dependencies
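For example, in the plugin's .gemspec (the gem names here are made up):
# my_plugin.gemspec -- sketch of a permissive dependency declaration:
Gem::Specification.new do |spec|
  spec.name    = 'my_plugin'
  spec.version = '2.1.0'
  spec.summary = 'Example plugin gemspec'
  spec.authors = ['Your Name']
  # Any version 1 release of the dependency greater than 1.2.3 is acceptable:
  spec.add_runtime_dependency 'some_dependency', '~> 1', '> 1.2.3'
end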
Top-level applications on the other hand generally want to lock to specific versions of gems within their Gemfile so that their builds are reproducible. In theory you get that for free by checking in the Gemfile.lock file which means that when bundler builds the gem bundle in a new workspace it will use the exact versions specified in that file, even though the rules in the Gemfile may allow a range of possible versions.
In practice, many engineers prefer to take a stricter/more declarative approach by specifying absolute versions within their Gemfile.
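For example (gem names and version numbers here are hypothetical):
# Gemfile -- sketch of a top-level application pinning exact versions for
# reproducible builds:
source 'https://rubygems.org'

gem 'origen', '0.60.0'
gem 'my_company_plugin', '1.4.2'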
Unit Testing
Unit testing is almost always done via the rspec tool and by convention launched via the origen specs command.
Unit tests are good because they can target very specific features and they are therefore relatively easy to debug when they fail.
These tests are often very useful for doing test driven development, particularly when creating new APIs. The technique is to use the act of writing the test to define the API you wish you had, then make the changes required to make the test pass.
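As a sketch of what that looks like in practice (the API named here is made up):
# spec/registers_spec.rb -- write the spec for the API you wish you had first:
require 'spec_helper'

describe 'MyPlugin.reset_value' do
  it 'returns the reset value of a named register' do
    # This fails until MyPlugin.reset_value is implemented; making it pass
    # drives the implementation.
    expect(MyPlugin.reset_value(:ctrl)).to eq(0x0000_8000)
  end
end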
The downside to these tests is that they take time to write and this often gets dropped when under time pressure.
There is also an art to writing them and sometimes it can be difficult for less experienced engineers to know how to write a test to target a specific feature.
For this reason, unit tests tend to be used more in plugins, particularly those which provide APIs to parent applications.
Top-level applications, particularly those concerned with generating patterns or test programs, tend to instead rely on diff-based testing...
Diff-based Testing
Diff-based testing, or acceptance testing, means that someone blesses a particular output (i.e. a pattern or a test program file) as 'good', and then tests can be created which simply say that as long as the current output matches some previously known good output, the test should pass.
If a diff is encountered, the test will fail and an engineer has to review the changes and decide whether they are unwanted and highlight a genuine problem, or whether the change is acceptable/expected, in which case the new output should become the new known good reference.
The advantage to this style of testing is that it doesn't take any time to write the test and yet it can provide extremely high test coverage. The downside is that it can sometimes be hard to track down the source of an unwanted diff since the test covers a large surface area.
Another issue can be that unwanted diffs get lost in the noise if you make changes that globally affect all output. For that reason it is best to make such changes in isolation.
Many plugins, and Origen itself, implement this style of testing via a command called origen examples, and the known good output is checked into the approved directory.
Some applications also implement a command called origen test, which is simply a wrapper that combines origen specs and origen examples into a single command; this is useful for creating a combined test coverage analysis (more on that below).
You should refer to some of the open source repositories for examples of how to create these commands, but all new Origen application shells should come with example code commented out in config/commands.rb.
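A minimal sketch of such a command, following the custom command pattern used in config/commands.rb (the exact structure of your generated commands.rb may differ), which simply chains the two commands:
# config/commands.rb -- sketch of a combined 'origen test' command:
case @command
when 'test'
  passed = system('origen specs') && system('origen examples')
  exit(passed ? 0 : 1)
end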
Checking in the approved output is OK when its footprint is relatively small, but for large production applications which may generate thousands of patterns or files and/or support many different targets, it is not so practical to check all of that in.
Also, in that case it sometimes becomes more useful to ask "what has changed in the output compared to version X of the application?", rather than always comparing to the latest known good version.
You can manually run such tests by checking out a specific version of your application, generating the output, and then using the origen save all command to locally save the approved output. Then check out the latest version of your application, run the same generation command again and see if there are any changes.
That workflow can become tedious after a while and so Origen provides a helper for it via the Regression Manager: http://origen-sdk.org/origen/api/Origen/RegressionManager.html
It is common for applications to integrate that as a command called origen regression.
A complete implementation of an origen regression command from one of my apps is included below to give an example of how to create it.
In this case we are using Origen's LSF API to parallelize the commands, but if you don't use that just replace Origen.lsf.submit_origen_job cmd with system cmd to run your generation commands locally.
The regression manager will take care of replicating the same commands in the current and previous version of the application and providing you with the results of the diff.
Note that you can also supply build_reference: false if you want to re-run the regression against the same previous version of the app that you ran last time.
Test Coverage
Running any Origen command with the -c or --coverage switches will enable the generation of a test coverage report which you can review to see how well your test suite is doing.
Here is an example of a coverage report - http://origen-sdk.org/jtag/coverage/#_AllFiles
# commands/regression.rb
require 'optparse'

options = {}
default_targets = %w(product_a.rb product_b.rb)
short_targets = %w(product_a.rb)

opt_parser = OptionParser.new do |opts|
  opts.banner = 'Usage: origen regression [options]'
  opts.on('-t', '--target NAME1,NAME2,NAME3', Array, 'Override the default target, NAME can be a full path or a fragment of a target file name') { |t| options[:target] = t }
  opts.on('-e', '--environment ENV', String, 'Override the default environment (tester).') { |e| options[:environment] = e }
  opts.separator " (default targets: #{default_targets})"
  opts.on('-s', '--short', 'Run a short regression (a single target)') { options[:short] = true }
  opts.separator " (default short target: #{short_targets})"
  opts.on('-f', '--full', 'Run a full regression (all production targets)') { options[:full] = true }
  opts.on('-a', '--all', 'An alias for --full') { options[:full] = true }
  opts.on('-c', '--ci', 'Build for bamboo CI') { options[:ci] = true } # avoids problematic targets for continuous integration
  opts.on('-n', '--no_reference', 'Skip the reference build (careful!). Use when re-running the same regression back-back.') { options[:build_reference] = false }
  opts.on('--email', 'Send yourself an email with the results when complete') { options[:send_email] = true }
  opts.on('--email_all', 'Send the results email to all developers') { options[:email_all_developers] = true; options[:send_email] = true }
  # Regression types -- saying no is easier to define the logic
  opts.on('--no-patterns', 'Skip the vector-based patterns in the regression test') { options[:no_patterns] = true }
  opts.on('--no-programs', 'Skip the programs in the regression test') { options[:no_programs] = true }
  # Regression type only -- have to omit all other regression types
  opts.on('--programs-only', 'Only do programs in the regression test') do
    options[:no_patterns] = true
  end
  opts.separator ' (NOTE: must run program-based regression first to get pattern list prior to pattern regressions)'
  opts.on('--patterns-only', 'Only do vector-based patterns in the regression test') do
    options[:no_programs] = true
  end
  opts.on('-v', '--version type', String, 'Version for the reference workspace, latest, last, tag(ex: v1.0.0) or commit') { |v| options[:version] = v }
  opts.on('--service_account', 'This option is set true only when running regressions through the Bamboo CI, a normal user should never have to use it') { options[:service_account] = true }
  opts.on('--reference_workspace location', String, 'Reference workspace location') { |ref| options[:reference_workspace] = ref }
  opts.separator ''
  opts.on('-h', '--help', 'Show this message') { puts opts; exit }
end
opt_parser.parse! ARGV

if options[:version]
  v = options[:version]
end
if options[:reference_workspace]
  ref = options[:reference_workspace]
end
if options[:target]
  t = options[:target]
  t[0].sub!(/target\//, '') if t.length == 1 # remove path if there -- causes probs below
elsif options[:short]
  t = short_targets
elsif options[:full]
  t = Origen.target.production_targets.flatten
else
  t = default_targets
end
if options[:environment]
  e = options[:environment]
  e.sub!(/environment\//, '') # remove path if there -- causes probs below
else
  e = 'v93k.rb' # default environment
end
options[:target] = t
options[:environment] = e

def highlight(msg)
  Origen.log.info '######################################################'
  Origen.log.info msg
  Origen.log.info '######################################################'
end

# Required to put the reference workspace in debug mode since the regression.rb file is modified,
# in future Origen should take care of this
Origen.environment.temporary = "#{options[:environment]}"

Origen.regression_manager.run(options) do |options|
  unless options[:no_programs]
    highlight 'Generating test programs...'
    Origen.target.loop(options) do |options|
      cmd = "program program/full.list -t #{options[:target]} --list #{options[:target]}.list -o #{Origen.root}/output/#{Origen.target.name} --environment #{options[:environment]} --mode debug --regression"
      Origen.lsf.submit_origen_job cmd
    end
    highlight 'Waiting for test programs to complete...'
    Origen.lsf.wait_for_completion
  end

  unless options[:no_patterns]
    highlight 'Generating test patterns...'
    Origen.target.loop(options) do |options|
      # Generate the patterns required for the test program
      Origen.file_handler.expand_list("#{options[:target]}.list").each do |pattern|
        Origen.lsf.submit_origen_job "generate #{pattern} -t #{options[:target]} -o #{Origen.root}/output/#{Origen.target.name} --environment #{options[:environment]} --regression"
      end
    end
  end
end

Related

Declarative Pipeline using env var as choice parameter value

Disclaimer: I can achieve the behavior I’m looking for with Active Choices plugin, BUT I really want this to work in a Jenkinsfile and controlled with scm because it’s tedious to configure the Active Choices on each job we may need them on. And with it being separate from the Jenkinsfile creation, it’s then one job defined in multiple places. :(
I am looking to verify if this is possible, because I can’t get the syntax right, if it is possible. And I haven’t been able to find any examples online:
pipeline {
    environment {
        ARTIFACTS = lib.myfunc() // this works well
    }
    parameters {
        choice(name: "Artifacts", choices: ARTIFACTS) // I can’t get this to work
    }
}
I cannot use the function inline in the declaration of the parameter. The errors were clear about that, but it seems as though I should be able to do what I’ve written out above.
I am not home, so I do not have the exceptions handy, but I will add them soon. They did not seem very helpful while I was working on this yesterday.
What have I tried?
I’ve tried having the function return a List, because it requires a list according to the docs, and I’ve also tried (illogically) returning a String in the precise syntax of a list of strings. (It was hacky, like return "['" + artifacts.join("', '") + "']" to look like ['artifact1.zip', 'artifact2.zip'].)
I also tried things like "$ARTIFACTS" and ${ARTIFACTS} in desperation.
The list of choices has to be supplied as a String containing newline characters (\n): choices: 'TESTING\nSTAGING\nPRODUCTION'
I was tipped off by this article:
https://st-g.de/2016/12/parametrized-jenkins-pipelines
Related to a bug:
https://issues.jenkins.io/plugins/servlet/mobile#issue/JENKINS-40358
:shrug:
First, we need to understand that Jenkins presents you with the Parameters page before it starts running your pipeline code. Once you've set up the parameters and pressed Build, then a node is allocated, variables are set, and your code starts to run.
But in your pipeline, as presented above, you want to run some code to prepare the parameters.
This is not how Jenkins usually works. It's definitely not doing the following: allocating a node, setting the variables, running some of your code until the parameters clause is reached, stopping all that, presenting you with a GUI, and then continuing where it left off. Again, that's not how Jenkins works.
This is why, when writing a new pipeline, your first option to build it is Build and not Build with Parameters. Jenkins hasn't run your code yet; it doesn't have any idea if there are any parameters. When running for the first time, it will remember the parameters (and any choices, if there were any) as they were configured for this (first) run, so in the second run you will see the parameters as configured in the first run. (Generally, in run number n you will see the result of the configuration from run number n-1.)
There are a number of ways to overcome this.
If having a "somewhat recent" (and not "current and absolutely up-to-date") situation fits you, your code may need minor changes to work — second time. (I don't know what exactly lib.myfunc() returns but if it's a choice of Development/Staging/Production this might be good enough.)
If having a "somewhat recent" situation is an absolute no-no (e.g. your lib.myfunc() returns the list of git branches, and "list of branches as of yesterday" is unacceptable), then your only solution is ActiveChoice. ActiveChoice allows you to run some code before showing you the Build with Parameters GUI (with script approval etc.).

Best approach to Ignore Cucumber feature files?

I have a large suite of feature files, and every single scenario is tagged @regression.
After running full regression I realized that some features do not need to be run for the current environment.
What is the best approach to ignore specific scenarios, keeping in mind that each scenario is tagged with @regression?
You can use Tags to run certain features/scenarios, or not run them.
To specifically ignore them, see Ignoring a subset of scenarios:
"You can tell Cucumber to ignore scenarios with a particular tag:
Using JUnit runner class:
@CucumberOptions(tags = "not @smoke")
public class RunCucumberTest {}
"

How to run one feature file as initialization (i.e. before all other feature files) in cucumber-jvm?

I have a cucumber feature file 'A' that serves to set up the environment (data clean-up and initialization). I want to have it executed before all other feature files can run.
It's kind of like a @Before hook as in http://zsoltfabok.com/blog/2012/09/cucumber-jvm-hooks/. However, that does not work because my feature file 'A' contains hundreds of cucumber steps and it is not as simple as:
@Before
public void beforeScenario() {
    tomcat.start();
    tomcat.deploy("munger");
    browser = new FirefoxDriver();
}
Instead, it's better to be able to run 'A' as a whole feature file.
I've searched around but did not find an answer. I am so surprised that no one has had this type of requirement before.
The closest I found is 'background'. But that means I can have only one huge feature file with the content of 'A' as 'background' at the top, and the rest of my tests in the same file. I really do not want to do that.
Any suggestions?
By default, Cucumber features are run single-threaded, in order:
Alphabetically by feature file directory
Alphabetically by feature file name within directory
Scenario execution is then by order within the feature file.
So have your initialization feature in the first directory (alphabetically) with a file name that sorts first (alphabetically) in that directory.
That being said it is generally a bad practice to require an execution order in your feature files. We run our feature files in parallel so order is meaningless. For Jenkins or TeamCity you could add a build step that executes the one feature file followed by a second build step that executes the rest of your feature files.
I also have a project where we have a single feature file that contains a very long scenario called Scenario: Test data, made up of a lot of very long steps, like this:
Given the system knows about the following employees
  | uuid | user-key   | name | nickname |
  | 1    | 0101140000 | Anna | annie    |
  ... hundreds of lines like this follow ...
We see these long SystemKnows scenarios as quite valuable, so that our testers, Product Owner and developers have a baseline of what data are in the system. Our domain is quite complex, and we need this baseline of reference data for everyone to be able to understand the tests.
(These reference data become almost like well-known personas, and are a shared team metaphor.)
In the beginning, we were relying on the alphabetic naming convention to have the AAA.feature run first.
Later, we discovered that this setup was brittle, and decided to use the following trick, inspired by the PageObject pattern:
Add a background with the single line Given(~'^I set test data for all feature files$')
In the step definition, have a factory create the test data, and make sure inside the factory method that it is only created once, e.g. testFactory.createTestData()
In this way, you have both the convenience of expressing the reference setup as a scenario, which enhances team communication, and a stable test setup.
Hope this is helpful!
Agata

How do I write a SCons script with hard-to-predict dynamic sources?

I'm trying to set up a build system involving a code generator. The exact files generated are unknown until after the generator is run, but I'd like to be able to run further build steps by pattern matching (run some program on all files with some extension). Is this possible?
Some of the answers here involving code generation seem to assume that the output is known or a listing of generated files is created. This isn't impossible in my case, but I'd like to avoid it since it makes things more complicated.
https://bitbucket.org/scons/scons/wiki/DynamicSourceGenerator seems to indicate that it's possible to add additional targets during Builder actions, but while I could get the build to run and list the generated files, any build steps introduced don't run.
https://bitbucket.org/scons/scons/wiki/NonDeterministicDependencies uses Scanners to add build steps. I put a glob(...) in a scanner, and it succeeds in detecting the generated files, but the files are inexplicably deleted before it actually runs the dependent step.
Is this use case possible? And why is SCons deleting my generated files?
A toy example
source (the file referenced in SConscript)
An example generator; it constructs 3 files (not easily known to the build system) and puts them in the argument folder:
echo "echo 1" > $1/gen1.txt
echo "echo 2" > $1/gen2.txt
echo "echo 3" > $1/gen3.txt
SConstruct
Just sets up a variant_dir
SConscript('SConscript', variant_dir='build')
SConscript
The goal is for it to:
"Compile" the generator (in this toy example, just copies a file called 'source' and adds execute permissions
Run the "compiled" generator ('source' is a script that generates files)
Perform some operation on each of those generated files by extension. This example just runs the "compile" copy operation on them (for simplicity).
env = Environment()
env.Append(BUILDERS = {'ExampleCompiler' :
                       Builder(action=[Copy('$TARGET', '$SOURCE'),
                                       Chmod('$TARGET', 0755)])})
generator = env.ExampleCompiler('generator', 'source')

env.Append(BUILDERS = {'GeneratorRun' :
                       Builder(action=[Mkdir('$TARGET'),
                                       '$SOURCE $TARGET'])})
generated_dir = env.GeneratorRun(Dir('generated'), generator)
Everything's fine up to here, where all the targets are explicitly known to the build system ahead of time.
Attempting to use this block of code to glob over the generated files causes SCons to delete (!!) the generated files:
for generated in generated_dir[0].glob('*.txt'):
    generated_run = env.ExampleCompiler(generated.abspath + '.sh', generated)
Attempting to use an action to update the build tree results in additional actions not being run:
def generated_scanner(target, source, env):
    for generated in source[0].glob('*.txt'):
        print "scanned " + generated.abspath
        generated_target = env.ExampleCompiler(generated.abspath + '.sh', generated)
        Alias('TopLevelAlias', generated_target)

env.Append(BUILDERS = {'GeneratedOperation' :
                       Builder(action=[generated_scanner])})
dummy = env.GeneratedOperation(generated_dir[0].File('#dummy'), generated_dir)
Alias('TopLevelAlias', dummy)
The Alias operations are suggested in the above dynamic source generator guide, but don't seem to do anything. The prints do execute and indicate that the action gets run.
Running some build pattern on special file extensions is possible with SCons. For C/CPP files this is the preferred scheme, for example:
env = Environment()
env.Program('main', Glob('*.cpp'))
The main task of SCons, as a build system, is to do the minimum amount of work such that all your targets are up-to-date. This makes things complicated for the use case you've described above, because it's not clear how you can reach a "stable" situation where no generated files are added and all targets are built.
You're probably better off using a simple Python script directly... I really don't see how using SCons (or any other build system for that matter) is mission-critical in this case.
Edit:
At some point you have to tell SCons about the created files (*.txt in your example above), and for tracking all dependencies properly, the list of *.txt files has to be complete. This is the task of the Emitter within SCons, which is responsible for returning the list of resulting target and source files for a Builder call. Note that these files don't have to exist physically during the "parse" phase of SCons. Please also have a look at my answer to Scons: create late targets, which goes into some more detail.
Once you have a proper Emitter in place (see also https://bitbucket.org/scons/scons/wiki/ToolsForFools, "Using Emitters") you should be able to use the Glob('*.txt') call, which will detect and track your created files automatically.
Finally, on our page "Talks and Slides" ( https://bitbucket.org/scons/scons/wiki/TalksAndSlides ) you can find my talk from PyCon FR 2014, "Why SCons is Not Slow", which briefly explains how SCons works internally. This might be helpful in understanding this problem better and coming up with a full solution.

Guard and Cucumber: when I edit a step definition I'd like to only run features that implement this step

I have read the topic Guardfile for running single cucumber feature in subdirectory?, and this works great: when I change a feature, only that feature will be run by Guard.
But in the other direction it doesn't work: when I edit any step definition file, all features are always run, whether or not they use any of the steps in the step definition file.
This is not nice. I'd like at least to have only those features run which use any of the steps in the edited file; even better would be if Guard could see which step is currently being edited, and then only run the features that use this specific step.
The first shouldn't be that hard to accomplish, I guess; the second rather seems wishful thinking...
To master Guard and have the perfect setup for your projects and your own needs, you have to change the Guardfile and configure your watchers accordingly. The templates that come with each Guard plugin try to match the most useful behavior for most users, which might differ from your personal preferences.
Each Guard plugin starts with the guard DSL method, followed by an options hash to configure the Guard plugin. The options are often different for different Guard plugins and you have to consult the plugin README for more information.
In between the guard block do ... end you normally configure your watchers. A watcher must be defined with a RegExp, which describes the files to be watched. I use Rubular to test my watchers, and you can paste your current features copied from the output of find features to have real files to test your RegExp against.
The line
watch(%r{features/.+\.feature})
for example watches for all files in the features folder that ends with .feature. Since there is no block provided to the watcher, the matched file is passed unmodified to Guard::Cucumber for running.
The watcher
watch(%r{features/support/.+}) { 'features' }
matches all files in the features/support directory, and because the block always returns features, every time a file within the support directory changes, features is passed to Guard::Cucumber and thus all features are executed.
The last line
watch(%r{features/step_definitions/(.+)_steps\.rb}) do |m|
  Dir[File.join("**/#{m[1]}.feature")][0] || 'features'
end
watches for every file that ends with _steps.rb in the features/step_definitions directory and tries to match a feature for the step definition. Please notice the parentheses in the RegExp features/step_definitions/(.+)_steps\.rb. This defines a match group that is available later in your watcher block. For example, a step definition features/step_definitions/user_steps.rb will match, and the first match group (m[1]) will contain the value user.
Now we try to find a matching file in all subdirectories (**) that is named user.feature. If one is found, the first matching file ([0]) is run; if nothing is found, all features are run.
So it looks like you've named your steps differently from what the default Guard::Cucumber Guardfile is expecting, which is totally fine. Just change the watcher to match your naming convention, for example:
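For instance, if your step definition files were named <name>.rb under features/steps (a hypothetical layout), the watchers could look like this:
# Guardfile -- sketch of watchers adapted to a hypothetical features/steps/<name>.rb layout:
guard 'cucumber' do
  watch(%r{features/.+\.feature})
  watch(%r{features/support/.+}) { 'features' }

  # Run only the feature matching an edited step definition file, or fall back
  # to running everything:
  watch(%r{features/steps/(.+)\.rb}) do |m|
    Dir[File.join("**/#{m[1]}.feature")][0] || 'features'
  end
end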
