Guard and Cucumber: when I edit a step definition I'd like to only run features that implement this step - cucumber

I have read the topic "Guardfile for running single cucumber feature in subdirectory?", and this works great: when I change a feature, only that feature is run by Guard.
But in the other direction it doesn't work: when I edit any step definition file, all features are run, whether they use any of the steps in the step definition file or not.
This is not nice. I'd like at least only those features to be run that use any of the steps in the edited file; even better would be if Guard could see which step is currently being edited, and then only run the features that use this specific step.
The first shouldn't be that hard to accomplish, I guess; the second rather seems wishful thinking...

To master Guard and have the perfect setup for your projects and your own needs, you have to change the Guardfile and configure your watchers accordingly. The templates that come with each Guard plugin try to match the most useful behavior for most users, which might differ from your personal preferences.
Each Guard plugin starts with the guard DSL method, followed by an options hash to configure the Guard plugin. The options often differ between Guard plugins, and you have to consult the plugin README for more information.
Inside the guard block (do ... end) you normally configure your watchers. A watcher must be defined with a RegExp, which describes the files to be watched. I use Rubular to test my watchers, and you can paste your current feature paths, copied from the output of find features, to have real files to test your RegExp against.
The line
watch(%r{features/.+\.feature})
for example watches all files in the features folder that end with .feature. Since there is no block provided to the watcher, the matched file is passed unmodified to Guard::Cucumber for running.
The watcher
watch(%r{features/support/.+}) { 'features' }
matches all files in the features/support directory, and because the block always returns 'features', every time a file within the support directory changes, 'features' is passed to Guard::Cucumber and thus all features are executed.
The last line
watch(%r{features/step_definitions/(.+)_steps\.rb}) do |m|
  Dir[File.join("**/#{m[1]}.feature")][0] || 'features'
end
watches every file that ends with _steps.rb in the features/step_definitions directory and tries to match a feature for the step definition. Please notice the parentheses in the RegExp features/step_definitions/(.+)_steps\.rb. They define a match group that is available later in your watcher block. For example, a step definition features/step_definitions/user_steps.rb will match, and the first match group (m[1]) will contain the value user.
Now we try to find a matching file in all subdirectories (**) that is named user.feature. If one exists, the first matching file ([0]) is run; if nothing is found, all features are run.
So it looks like you've named your steps differently from what the default Guard::Cucumber Guardfile is expecting, which is totally fine. Just change the watcher to match your naming convention.
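If you want to go further toward your first wish, a watcher block can run arbitrary Ruby, so it could grep the feature files for the steps defined in the edited step definition file. A rough, untested sketch (it assumes the common Given/When/Then(/^...$/) style of step definitions):

watch(%r{features/step_definitions/(.+)_steps\.rb}) do |m|
  # Extract the step regexes defined in the edited file.
  step_patterns = File.read(m[0]).scan(%r{^\s*(?:Given|When|Then)\s*\(?\s*/(.+)/}).map { |c| Regexp.new(c.first) }
  # Select every feature file that contains a step matching one of them.
  features = Dir['features/**/*.feature'].select do |feature|
    File.read(feature).lines.any? do |line|
      next false unless line =~ /^\s*(?:Given|When|Then|And|But)\s+(.+)/
      step_text = Regexp.last_match(1).strip
      step_patterns.any? { |pattern| pattern =~ step_text }
    end
  end
  features.empty? ? 'features' : features
end

This is slower than a pure filename mapping, since it rereads the feature files on every change, but for a moderately sized suite that cost is usually negligible.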


Declarative Pipeline using env var as choice parameter value

Disclaimer: I can achieve the behavior I’m looking for with the Active Choices plugin, BUT I really want this to work in a Jenkinsfile and be controlled with SCM, because it’s tedious to configure Active Choices on each job we may need them on. And with it being separate from the Jenkinsfile creation, it’s then one job defined in multiple places. :(
I am looking to verify whether this is even possible, because I can’t get the syntax right if it is, and I haven’t been able to find any examples online:
pipeline {
    environment {
        ARTIFACTS = lib.myfunc() // this works well
    }
    parameters {
        choice(name: "Artifacts", choices: ARTIFACTS) // I can’t get this to work
    }
}
I cannot use the function inline in the declaration of the parameter. The errors were clear about that, but it seems as though I should be able to do what I’ve written out above.
I am not home, so I do not have the exceptions handy, but I will add them soon. They did not seem very helpful while I was working on this yesterday.
What have I tried?
I’ve tried having the function return a List, because the docs say it requires a list, and I’ve also tried (illogically) returning a String in the precise syntax of a list of strings. (It was hacky, like return "['" + artifacts.join("', '") + "']", to look like ['artifact1.zip', 'artifact2.zip'].)
I also tried things like "$ARTIFACTS" and ${ARTIFACTS} in desperation.
The list of choices has to be supplied as a String containing newline characters (\n): choices: 'TESTING\nSTAGING\nPRODUCTION'
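Putting that together, one hedged sketch (assuming lib.myfunc() returns a List of Strings, and that your Jenkins version accepts a Groovy expression as the choices value; computing the value before the pipeline block sidesteps the inline-call errors mentioned above):

def artifactChoices = lib.myfunc().join('\n') // e.g. "artifact1.zip\nartifact2.zip"

pipeline {
    agent any
    parameters {
        // choices must be a single String with one option per line
        choice(name: 'Artifacts', choices: artifactChoices)
    }
    stages {
        stage('Show selection') {
            steps {
                echo "Selected artifact: ${params.Artifacts}"
            }
        }
    }
}

As the next answer explains, choices computed this way only take effect on the run after they change.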
I was tipped off by this article:
https://st-g.de/2016/12/parametrized-jenkins-pipelines
Related to a bug:
https://issues.jenkins.io/plugins/servlet/mobile#issue/JENKINS-40358
:shrug:
First, we need to understand that Jenkins starts running your pipeline code by presenting you with the Parameters page. Once you've set up the parameters and pressed Build, a node is allocated, variables are set, and your code starts to run.
But in your pipeline, as presented above, you want to run some code to prepare the parameters.
This is not how Jenkins usually works. It's definitely not doing the following: allocating a node, setting the variables, running some of your code until parameters clause is reached, stopping all that, presenting you with GUI, and then continuing where it left off. Again, it's not how Jenkins works.
This is why, when writing a new pipeline, your first option to build it is Build and not Build with Parameters. Jenkins hasn't run your code yet; it doesn't have any idea whether there are any parameters. When running for the first time, it will remember the parameters (and any choices, if there were any) as configured for this (first) run, so in the second run you will see the parameters as configured in the first run. (Generally, in run number n you will see the result of the configuration from run number n-1.)
There are a number of ways to overcome this.
If having a "somewhat recent" (and not "current and absolutely up-to-date") situation fits you, your code may need minor changes to work — second time. (I don't know what exactly lib.myfunc() returns but if it's a choice of Development/Staging/Production this might be good enough.)
If having a "somewhat recent" situation is an absolute no-no (e.g. your lib.myfunc() returns the list of git branches, and "list of branches as of yesterday" is unacceptable), then your only solution is ActiveChoice. ActiveChoice allows you to run some code before showing you the Build with Parameters GUI (with script approval etc.).

PTC Integrity batch update member revision

Is there a way to update the member revision of a big list of files via command line?
I can't use :working or :head but have to specify a different revision for each file.
As far as I know --selectionFile only takes paths as input, but not the revision numbers.
edit: I wanted to set the member revision for a very big list of files, and I wanted to avoid writing the si updaterevision ... command for every file, as it takes ages to complete for that many files. Instead I wanted to know if there is a more advanced method to specify a list of files and their revisions, so that updaterevision only has to be run once for the whole list of files (like it is with :working).
But as is said in the comment, there is no such possibility.
edit2: I have used MKS for a couple of years now, and as I now know, there is no possibility (at least up to MKS 11.6) to update many files to different revisions with one single command-line call. But using one call per member, as was proposed, made the whole operation take up to several hours, as I had many thousands of members in the sandbox and MKS needs some time to complete each si command.
Some time has passed since you asked this question; here is my comment in case it could still be useful for you in the future.
First, it is not completely clear what you want to achieve. Please be more descriptive and, if possible, provide an example.
What I understand as of now is that you need to set a bunch of listed files to given member revisions through the command line. This is fairly simple; the most complicated part is actually assembling the list of files to be updated and the revision that you want to set as member for each.
I recommend you create a batch file with the commands to update each member. You can use regex to do it very quickly and without much trouble.
Here is an example for updating one file member revision:
si updaterevision --hostname=servername --port=portnumber --user=username --changepackageid=5873763:2 --revision=:working myfile_a1.c
where
servername = the name of the server where your sandbox is located
portnumber = the port that provides access to the server for your sandbox
username = your login user id
changepackageid = here you change the number to use your defined TASK:ChangePackage for these changes
revision = if you have a working revision that you want to become the member revision, just use :working as the revision; otherwise you can specify a revision number, e.g. --revision=1.2
At the end you define the name of the file you want to update.
Go to your sandbox root folder, open a CMD window, and run the batch file. It will execute each line, applying your changes.
If you have a list of files with the revision you want as member, you can use regex to convert it into a batch file.
Example list of files in text file:
file1.c 1.10
file3.c 1.19
sec_file1.c 1.1.2.1
support.h 1.7
Use Notepad++ or another text editor with regex support to do a simple search and replace:
Search = (\S+)\s+([\d.]+).*
Replace = si updaterevision --hostname=servername --port=portnum --user=userid --changepackageid=6123933:4 --revision=\2 \1
\1 => FileName
\2 => File revision
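Applied to the example list above, the replace produces a batch file like this:

si updaterevision --hostname=servername --port=portnum --user=userid --changepackageid=6123933:4 --revision=1.10 file1.c
si updaterevision --hostname=servername --port=portnum --user=userid --changepackageid=6123933:4 --revision=1.19 file3.c
si updaterevision --hostname=servername --port=portnum --user=userid --changepackageid=6123933:4 --revision=1.1.2.1 sec_file1.c
si updaterevision --hostname=servername --port=portnum --user=userid --changepackageid=6123933:4 --revision=1.7 support.h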
Finally, just save the document as a batch file and run it.
Just speculating: if you have a large list of members along with the member revision you want to update to, then you also have a sandbox that served to generate this list.
If so, my approach would be:
c:\MySandbox> si updaterevision --recurse --revision=:working
If your member/revision list comes from a development path, you could first have a sandbox targeting that devpath, resync, (close the sandbox if opened in the GUI), retarget the sandbox to the destination devpath (or mainline) you want, and then issue the command above.
For a single-member approach, I would use si rlog to generate a list of si commands directly:
si rlog -R --noheaderformat --notrailerformat --revision=:working --format="si updaterevision {membername} --revision={revision}\r\n" > updaterevs.bat.txt
Review updaterevs.bat.txt, rename it to updaterevs.bat, and execute it.
(Be careful if using it on other sandboxes.)
Other interesting reading here might be the "snapshot sandbox" feature, checkpointing in general, and variants (devpaths).
Using only these features might be politically more correct within the philosophy of Integrity.

How to run one feature file as initialization (i.e. before all other feature files) in cucumber-jvm?

I have a cucumber feature file 'A' that serves to set up the environment (data cleanup and initialization). I want to have it executed before all other feature files can run.
It's kind of like the @Before hook described in http://zsoltfabok.com/blog/2012/09/cucumber-jvm-hooks/. However, that does not work because my feature file 'A' contains hundreds of cucumber steps and it is not as simple as:
@Before
public void beforeScenario() {
    tomcat.start();
    tomcat.deploy("munger");
    browser = new FirefoxDriver();
}
Instead, it's better to be able to run 'A' as a whole feature file.
I've searched around but did not find an answer. I am surprised that no one has had this type of requirement before.
The closest I found is 'Background'. But that would mean having one huge feature file with the content of 'A' as the Background at the top and the rest of my tests in the same file. I really do not want to do that.
Any suggestions?
By default, Cucumber features are run single-threaded, in order:
Alphabetically by feature file directory
Alphabetically by feature file name within directory
Scenario execution is then by order within the feature file.
So have your initialization feature in the first directory (alphabetically), with a file name that sorts first (alphabetically) in that directory.
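For instance, a layout like this (names are purely illustrative) makes the initialization feature sort, and therefore run, first:

features/
  00_setup/
    init.feature
  account/
    login.feature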
That being said, it is generally bad practice to require an execution order for your feature files. We run our feature files in parallel, so order is meaningless. For Jenkins or TeamCity you could add a build step that executes the one feature file, followed by a second build step that executes the rest of your feature files, as sketched below.
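A hedged sketch of those two build steps for cucumber-jvm run through Maven (the paths, the @setup tag, and the pre-5.x cucumber.options syntax are assumptions):

mvn test -Dcucumber.options="src/test/resources/features/setup/A.feature"
mvn test -Dcucumber.options="src/test/resources/features --tags ~@setup"

The first step runs only the initialization feature; the second runs everything not tagged @setup.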
I also have a project where we have a single feature file that contains a very long scenario called Scenario: Test data, with a lot of very long lines, like this:
Given the system knows about the following employees
  | uuid | user-key   | name | nickname |
  | 1    | 0101140000 | Anna | annie    |
  ... hundreds of lines like this follow ...
We see these long SystemKnows scenarios as quite valuable, so that our testers, Product Owner and developers have a baseline of what data are in the system. Our domain is quite complex, and we need this baseline of reference data for everyone to be able to understand the tests.
(These reference data become almost like well-known personas, and are a shared team metaphor.)
In the beginning, we were relying on the alphabetic naming convention to have AAA.feature run first.
Later, we discovered that this setup was brittle, and decided to use the following trick, inspired by the PageObject pattern:
Add a background with the single line Given(~'^I set test data for all feature files$')
In the step definition, have a factory create the test data, and make sure inside the factory method that the data is only created once, like testFactory.createTestData()
In this way, you have both the convenience of expressing the reference setup as a scenario, which enhances team communication, and a stable test setup; a sketch follows below.
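A minimal sketch of such a step definition in Java (assuming the cucumber-jvm io.cucumber.java.en annotations; TestDataSteps and TestDataFactory are hypothetical stand-ins for the factory mentioned above):

import io.cucumber.java.en.Given;

public class TestDataSteps {

    // Shared flag so the expensive setup runs only once per test run.
    private static boolean testDataCreated = false;

    @Given("^I set test data for all feature files$")
    public void iSetTestDataForAllFeatureFiles() {
        if (!testDataCreated) {
            TestDataFactory.createTestData(); // create the reference data exactly once
            testDataCreated = true;
        }
    }
}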
Hope this is helpful!
Agata

Error of "encountered a second time" by find.pm

Everyone, when I deploy my package to a Linux environment, I get this error:
.../Linux-2.6c2.5-i686/Ncurses/Ncurses-15766.0-0/lib/libncurses.so.5 is encountered a second time at /apollo/_env/FBAMerchantAutoRemovalDaemon-swit1na.1755067.237551097.1107633519/perl/lib/perl5.8-dist/File/Find.pm line 542.
Though I have read the Perl script, I have no idea what is wrong. I suspect my environment is tainted. Does anyone have an idea what is wrong and how I can debug this problem? Thanks a lot in advance!
Zhe
From perldoc File::Find
follow
Causes symbolic links to be followed. Since directory trees with symbolic links (followed) may contain files more than once and may even have cycles, a hash has to be built up with an entry for each file. This might be expensive both in space and time for a large directory tree. See "follow_fast" and "follow_skip" below. If either follow or follow_fast is in effect:
It is guaranteed that an lstat has been called before the user's wanted() function is called. This enables fast file checks involving _. Note that this guarantee no longer holds if follow or follow_fast are not set.
There is a variable $File::Find::fullname which holds the absolute pathname of the file with all symbolic links resolved. If the link is a dangling symbolic link, then fullname will be set to undef.
So if, for the purposes of your application, it is OK to follow symlinks, invoke find with the follow option set:
find({ wanted => \&process, follow => 1 }, $dir);
Or, consider if one of the other follow_skip behaviors is more appropriate for your application:
follow_skip
follow_skip==1, which is the default, causes all files which are neither directories nor symbolic links to be ignored if they are about to be processed a second time. If a directory or a symbolic link are about to be processed a second time, File::Find dies.
follow_skip==0 causes File::Find to die if any file is about to be processed a second time.
follow_skip==2 causes File::Find to ignore any duplicate files and directories but to proceed normally otherwise.
It may be that follow_skip => 2 is more appropriate for your application. Only you can make that decision.
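Putting it together, a minimal sketch (process stands in for whatever your wanted() callback does; the directory path is illustrative):

use strict;
use warnings;
use File::Find;

sub process {
    print "$File::Find::name\n";    # path of the current file
}

# Follow symlinks, but ignore duplicate files and directories
# instead of dying (follow_skip => 2).
find({ wanted => \&process, follow => 1, follow_skip => 2 }, '/path/to/tree');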

Can gradle do substitutions as it copies resources?

For a group of developers, all the differences are stored in a normal property file:
token1=some value
token2=9000
etc.
The 'tokens' are used in a series of XML files that reside in the normal src/main/resources directory. When Gradle copies these files into the build directory (and I don't know for sure what task that is), is there any opportunity to execute custom code? Specifically, I would like to have the token values from the property file substituted into the copy. Thus, the original copy remains untouched, but the version in the runtime has the desired values for the given developer.
Finally, I know this can be done brute force with two or three steps that change the file after it is copied. I really want to know if there is an elegant way to do this in a single step.
After compilation, Gradle runs the processResources task, which copies the resources into the build directory. While copying the resources, processResources can be configured to do the filtering (or to execute custom code by adding a doLast block):
processResources {
    filter org.apache.tools.ant.filters.ReplaceTokens, tokens: [
        ...
    ]
}
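A fuller sketch of the wiring (the file name dev.properties is an assumption; ReplaceTokens replaces @token1@-style markers, so the XML files would need to write the tokens as @token1@, @token2@, ...):

// Load the developer-specific property file.
def props = new Properties()
file('dev.properties').withInputStream { props.load(it) }

processResources {
    // Substitute @key@ markers in the copied resources with the loaded values.
    filter org.apache.tools.ant.filters.ReplaceTokens,
           tokens: props.collectEntries { k, v -> [(k): v] }
}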
These two links can provide more help:
http://java.dzone.com/articles/resource-filtering-gradle
http://mrhaki.blogspot.in/2010/11/gradle-goodness-add-filtering-to.html
