How to rerun the failed scenarios using Cucumber?

I'm using Cucumber for my tests. How do I rerun only the failed tests?

Run Cucumber with the rerun formatter:
cucumber -f rerun --out rerun.txt
It will output the locations of all failed scenarios to this file.
Then you can rerun them using:
cucumber @rerun.txt
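If you do this often, the two commands can also be captured as profiles in cucumber.yml; this is a minimal sketch under the assumption of a Cucumber-Ruby project with rerun.txt in the project root (profile names are arbitrary, and the rerun profile assumes rerun.txt exists and is non-empty):
# cucumber.yml
default: --format rerun --out rerun.txt
rerun: "@rerun.txt --format pretty"
Then cucumber -p default records any failures, and cucumber -p rerun replays only those.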

Here is my simple and neat solution.
Step 1: Write your Cucumber Java runner as shown below, with rerun:target/rerun.txt. Cucumber writes the locations (feature file and line number) of the failed scenarios to rerun.txt, like this:
features/MyScenaios.feature:25
features/MyScenaios.feature:45
We can use this file in Step 2. Name this file MyScenarioTests.java. This is your main file for running your tagged scenarios. If any scenarios fail, MyScenarioTests.java will write them to rerun.txt under the target directory.
@RunWith(Cucumber.class)
@CucumberOptions(
        monochrome = true,
        features = "classpath:features",
        plugin = {"pretty", "html:target/cucumber-reports",
                "json:target/cucumber.json",
                "rerun:target/rerun.txt"}, // creates a text file with the failed scenarios
        tags = "@mytag"
)
public class MyScenarioTests {
}
Step 2: Create another runner as shown below. Let's call it FailedScenarios.java. Whenever you notice failed scenarios, run this file. It uses target/rerun.txt as the input for running the scenarios.
@RunWith(Cucumber.class)
@CucumberOptions(
        monochrome = true,
        features = "@target/rerun.txt", // Cucumber picks the failed scenarios from this file
        format = {"pretty", "html:target/site/cucumber-pretty",
                "json:target/cucumber.json"}
)
public class FailedScenarios {
}
Every time you notice failed scenarios, run the file from Step 2.
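If you run these JUnit runners with Maven and Surefire (an assumption; the class names are the ones above), a two-pass invocation could look like this:
mvn test -Dtest=MyScenarioTests
mvn test -Dtest=FailedScenarios -DfailIfNoTests=false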

task cucumber() {
    dependsOn assemble, compileTestJava
    doLast {
        javaexec {
            main = "io.cucumber.core.cli.Main"
            classpath = configurations.cucumberRuntime + sourceSets.main.output + sourceSets.test.output
            args = ['--plugin', 'json:target/cucumber-reports/json/cucumber.json',
                    '--plugin', 'rerun:target/rerun.txt',
                    '--glue', 'steps',
                    'src/test/resources']
        }
    }
}
task cucumberRerunFailed() {
    doLast {
        javaexec {
            main = "io.cucumber.core.cli.Main"
            classpath = configurations.cucumberRuntime + sourceSets.main.output + sourceSets.test.output
            args = ['--plugin', 'json:target/cucumber-reports/json/cucumber.json',
                    '--glue', 'steps', // assumes the same step definitions are needed on rerun
                    '@target/rerun.txt']
        }
    }
}
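With these tasks defined, a typical invocation is a first pass followed by a rerun pass; the || is plain shell syntax, so the rerun only triggers when the first pass fails:
./gradlew cucumber || ./gradlew cucumberRerunFailed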

I know this is old, but I found my way here first and only later found a much more up-to-date answer (not the accepted one, but Cyril Duchon-Doris's answer):
https://stackoverflow.com/a/41174252/215789
Since cucumber 3.0 you can use --retry to specify how many times to retry a scenario that failed.
https://cucumber.io/blog/open-source/announcing-cucumber-ruby-3-0-0/
Just tack it on to your cucumber command:
cucumber ... --retry 2

You need at least version 1.2.0 in order to use the @target/rerun.txt feature. After that, just create a runner that runs at the end and uses this file. Also, if you are using Jenkins, you can put a tag on the randomly failing features so the build doesn't fail unless a feature fails even after being rerun.


Calling a variadic function in a Jenkinsfile fails unpredictably

Context
I'm running Jenkins on Windows, writing declarative pipelines. I'm trying to put multiple commands in a single bat step, while still making the step fail if any of the included commands fails.
The purpose of this is twofold:
The best-practices document suggests that creating a step for every little thing might not be the best idea (this might also be solved by putting more stuff in batch files, but my builds aren't that big yet).
I want to execute some commands in a Visual Studio command prompt, which is achieved by first calling a .bat file that sets up the environment, and then running any necessary commands.
Code
I wrote the following Groovy code in my Jenkinsfile:
def ExecuteMultipleCmdSteps(String... steps)
{
    bat ConcatenateMultipleCmdSteps(steps)
}

String ConcatenateMultipleCmdSteps(String... steps)
{
    String[] commands = []
    steps.each { commands += "echo ^> Now starting: ${it}"; commands += it }
    return commands.join(" && ")
}
The problem/question
I can't get this to work reliably. That is, in a single Jenkinsfile, I can have multiple calls to ExecuteMultipleCmdSteps(), and some will work as intended, while others will fail with java.lang.NoSuchMethodError: No such DSL method 'ExecuteMultipleCmdSteps' found among steps [addBadge, ...
I have not yet found any pattern in the failures. I thought it only failed when executing from within a warnError block, but now I also have a problem from within a dir() block, while the same construct works fine in a different Jenkinsfile.
This problem seems to be related to ExecuteMultipleCmdSteps() being a variadic function. If I provide an overload with the correct number of arguments, then that overload is used without problem.
I'm at a loss here. Your input would be most welcome.
Failed solution
At some point I thought it might be a scoping/importing thing, so I enclosed ExecuteMultipleCmdSteps() in a class (code below) as suggested by this answer. Now, the method is called as Helpers.ExecuteMultipleCmdSteps(), and that results in a org.jenkinsci.plugins.scriptsecurity.sandbox.RejectedAccessException: No such static method found: staticMethod Helpers ExecuteMultipleCmdSteps org.codehaus.groovy.runtime.GStringImpl org.codehaus.groovy.runtime.GStringImpl
public class Helpers {
    public static environment

    public static void ExecuteMultipleCmdSteps(String... steps)
    {
        environment.bat ConcatenateMultipleCmdSteps(steps)
    }

    public static String ConcatenateMultipleCmdSteps(String... steps)
    {
        String[] commands = []
        steps.each { commands += "echo ^> Now starting: ${it}"; commands += it }
        return commands.join(" && ")
    }
}
Minimal failing example
Consider the following:
hello = "Hello"
pipeline {
agent any
stages {
stage("Stage") {
steps {
SillyEcho("Hello")
SillyEcho("${hello}" as String)
SillyEcho("${hello}")
}
}
}
}
def SillyEcho(String... m)
{
echo m.join(" ")
}
I'd expect all calls to SillyEcho() to result in Hello being echoed. In reality, the first two succeed, and the last one results in java.lang.NoSuchMethodError: No such DSL method 'SillyEcho' found among steps [addBadge, addErrorBadge,...
Curiously succeeding example
Consider the following groovy script, pretty much equivalent to the failing example above:
hello = "Hello"
SillyEcho("Hello")
SillyEcho("${hello}" as String)
SillyEcho("${hello}")
def SillyEcho(String... m)
{
    println m.join(" ")
}
When pasted into a Groovy Script console (for example the one provided by Jenkins), this succeeds (Hello is printed three times).
Even though I'd expect this example to succeed, I'd also expect it to behave consistently with the failing example, above, so I'm a bit torn on this one.
Thank you for adding the failing and succeeding examples.
I expect your issues are due to the incompatibility of String and GString.
With respect to the differences between running it as a pipeline job and running the script in the Jenkins Script Console, I assume that the Jenkins Script Console is not as strict with type references, or that it tries to cast parameters based upon the function signature. I base this assumption on the following script, derived from yours:
hello = "Hello"
hello2 = "${hello}" as String
hello3 = "${hello}"
println hello.getClass()
println hello2.getClass()
println hello3.getClass()
SillyEcho(hello)
SillyEcho(hello2)
SillyEcho(hello3)
def SillyEcho(String... m)
{
    println m.getClass()
}
This is the output I got in the Jenkins Script Console:
class java.lang.String
class java.lang.String
class org.codehaus.groovy.runtime.GStringImpl
class [Ljava.lang.String;
class [Ljava.lang.String;
class [Ljava.lang.String;
I expect the pipeline doesn't cast the GString to String, but simply fails because there is no function with a GString parameter.
For debugging you could try to invoke .toString() on all elements you pass to your function.
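For example, a one-line fix at the call site from the failing pipeline (a sketch; it simply forces the conversion before dispatch):
SillyEcho("${hello}".toString()) // GStringImpl becomes java.lang.String, so the String... signature matches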
Update
This seems to be a known issue (or at least a reported one) with the pipeline interpreter: JENKINS-56758.
In the ticket an extra workaround is described using collections instead of varargs, which would remove the need to type-cast everything; see the sketch below.
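A minimal sketch of that collection-based workaround, reusing the names from the question:
def SillyEcho(List<String> m)
{
    echo m.join(" ")
}
// call sites pass a list, so no GString-to-String varargs coercion is involved:
SillyEcho(["Hello", "${hello}"])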
Not sure if this will answer your question, if not, consider it as a bigger comment.
I like how you borrowed the 'variadic functions' from C++ :)
However, in Groovy there is a more elegant way to deal with this.
Try this:
def ExecuteMultipleCmdSteps(steps)
{
    sh steps
        .collect { "echo \\> Now starting: $it && $it" }
        .join(" && ")
}
pipeline {
    agent any
    stages {
        stage ("test") {
            steps {
                ExecuteMultipleCmdSteps(["pwd", "pwd"])
            }
        }
    }
}
which works just fine for me:
[Pipeline] Start of Pipeline
[Pipeline] node
Running on Jenkins in /var/lib/jenkins/workspace/TestJob
[Pipeline] {
[Pipeline] stage
[Pipeline] { (test)
[Pipeline] sh
+ echo > Now starting: pwd
> Now starting: pwd
+ pwd
/var/lib/jenkins/workspace/TestJob
+ echo > Now starting: pwd
> Now starting: pwd
+ pwd
/var/lib/jenkins/workspace/TestJob
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[Pipeline] // node
[Pipeline] End of Pipeline
Finished: SUCCESS
You may want to rewrite your function like this.
The two errors you mention may have different causes.
The first one, "No such DSL method ...", is indeed a scoping one; you found the solution yourself, but I do not understand why the overload works in the same scope.
The second error may be solved with this answer. However, for me your code from the second approach also works just fine.

In jenkins job, create file using system groovy in current workspace

My task is to collect node details and list them in a certain format. I need to write the data to a file, save it as a csv file and attach it as an artifact.
But I am not able to create a file using a Groovy script in Jenkins with the "Execute System Groovy" plugin as a build step:
import jenkins.model.Jenkins
import hudson.model.User
import hudson.security.Permission
import hudson.EnvVars
EnvVars envVars = build.getEnvironment(listener);
filename = envVars.get('WORKSPACE') + "\\node_details.txt";
//filename = "${manager.build.workspace.remote}" + "\\node_details.txt"
targetFile = new File(filename);
println "attempting to create file: $targetFile"
if (targetFile.createNewFile()) {
    println "Successfully created file $targetFile"
} else {
    println "Failed to create file $targetFile"
}
print "Deleting ${targetFile.getAbsolutePath()} : "
println targetFile.delete()
Output obtained
attempting to create file: /home/jenkins/server-name/workspace/GET_NODE_DETAILS\node_details.txt
FATAL: No such file or directory
java.io.IOException: No such file or directory
at java.io.UnixFileSystem.createFileExclusively(Native Method)
at java.io.File.createNewFile(File.java:947)
at java_io_File$createNewFile.call(Unknown Source)
at org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:42)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
at org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:112)
at Script1.run(Script1.groovy:13)
at groovy.lang.GroovyShell.evaluate(GroovyShell.java:682)
at groovy.lang.GroovyShell.evaluate(GroovyShell.java:666)
at hudson.plugins.groovy.SystemGroovy.perform(SystemGroovy.java:81)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:772)
at hudson.model.Build$BuildExecution.build(Build.java:199)
at hudson.model.Build$BuildExecution.doRun(Build.java:160)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:535)
at hudson.model.Run.execute(Run.java:1732)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:88)
at hudson.model.Executor.run(Executor.java:234)
Sometimes I see people use the "manager" object; how can I get access to it?
Also, any ideas on how to accomplish the task?
Problem
A Groovy system script always runs on the Jenkins master node, while the workspace is a file path on your Jenkins slave node, which doesn't exist on the master node.
You can verify this with the following code:
theDir = new File(envVars.get('WORKSPACE'))
println theDir.exists()
It will return false. If you don't use a slave node, it will return true.
Solution
As we can't use a normal File, we have to use FilePath: http://javadoc.jenkins-ci.org/hudson/FilePath.html
if (build.workspace.isRemote())
{
    channel = build.workspace.channel
    fp = new FilePath(channel, build.workspace.toString() + "/node_details.txt")
} else {
    fp = new FilePath(new File(build.workspace.toString() + "/node_details.txt"))
}
if (fp != null)
{
    fp.write("test data", null) // writing to the file
}
Then it works in both cases.
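For completeness, the same FilePath API can read the file back regardless of where it lives; a small sketch using the fp variable from above:
println fp.readToString() // works for both the remote and the local case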
The answer by @Larry Cai covers one part: writing a file to a slave node from a System Groovy Script (as it runs on the master node).
The part I am answering is: "Sometimes I see people use the 'manager' object; how can I get access to it?"
This is the object already available in Post Build Groovy Script for accessing a lot of things like environment variables, Build Status, Build Display Name etc.
Quoted from https://wiki.jenkins-ci.org/display/JENKINS/Groovy+Postbuild+Plugin :
"The groovy script can use the variable manager, which provides various methods to decorate your builds.
Those methods can be classified into whitelisted methods and non-whitelisted methods."
To access it, we can call it directly in the post-build Groovy script, e.g.:
manager.build.setDescription("custom description")
manager.addShortText("add your message here")
All methods available on manager objects are documented here.
https://wiki.jenkins-ci.org/display/JENKINS/Groovy+Postbuild+Plugin#GroovyPostbuildPlugin-Whitelistedmethods
I suspect the error was caused by the path format. Could you try changing
filename = envVars.get('WORKSPACE') + "\\node_details.txt";
to
filename = envVars.get('WORKSPACE') + "/node_details.txt";
When I tried this on my local Jenkins server, it executed successfully.
The manager object is not available depending on how the Groovy is invoked, e.g. in an "Execute System Groovy Script" build step.
You can find the BadgeManager class in the Jenkins GroovyPostBuild plugin API here: https://javadoc.jenkins.io/plugin/groovy-postbuild/org/jvnet/hudson/plugins/groovypostbuild/GroovyPostbuildRecorder.BadgeManager.html#addShortText-java.lang.String-
ANSWER: Import the GroovyPostBuild plugin and create a new manager object. E.g. here a job with an "Execute System Groovy Script" step creates a manager object and calls the addShortText method:
// java.lang.Object
// org.jvnet.hudson.plugins.groovypostbuild.GroovyPostbuildRecorder.BadgeManager
// Constructor and Description
// BadgeManager(hudson.model.Run<?,?> build, hudson.model.TaskListener listener, hudson.model.Result scriptFailureResult)
import hudson.model.*
import org.jvnet.hudson.plugins.groovypostbuild.GroovyPostbuildAction
def build = Thread.currentThread().executable
manager = new org.jvnet.hudson.plugins.groovypostbuild.GroovyPostbuildRecorder.BadgeManager(build, null, null)
manager.addShortText("MANAGER TEST", "black", "limegreen", "0px", "white")
This question gives a hint; see here for a nearly working answer:
In jenkins job, create file using system groovy in current workspace
There, the trick is org.jvnet.hudson.plugins.groovypostbuild.GroovyPostbuildAction together with build.getActions().add(GroovyPostbuildAction.createShortText(text, "black", "limegreen", "0px", "white"));
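Putting those pieces together, a sketch of the action-based variant for an "Execute System Groovy Script" build step (it reuses the createShortText arguments quoted above; the badge text itself is arbitrary):
import org.jvnet.hudson.plugins.groovypostbuild.GroovyPostbuildAction

def build = Thread.currentThread().executable
// adds a badge to the build without constructing a BadgeManager
build.getActions().add(GroovyPostbuildAction.createShortText("MANAGER TEST", "black", "limegreen", "0px", "white"))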

How to access a Gradle configurations object correctly

First off, this is my first foray into Gradle/Groovy (using Gradle 1.10). I'm setting up a multi-project environment where I create a jar artifact in one project and then want to define, in another project, an Exec task which depends on the created jar. I'm setting it up something like this:
// This is from the jar-building project
jar {
    ...
}
configurations {
    loaderJar
}
dependencies {
    loaderJar files(jar.archivePath)
    ...
}

// From the project which consumes the built jar
configurations {
    loaderJar
}
dependencies {
    loaderJar project(path: ":gfxd-demo-loader", configuration: "loaderJar")
}
// This is my test task
task foo << {
    configurations.loaderJar.each { println it }
    println configurations.loaderJar.collect { it }[0]
    println configurations.loaderJar[0] // The following line breaks!!!
}
When executing the foo task it fails with:
> Could not find method getAt() for arguments [0] on configuration ':loaderJar'.
In my foo task I'm just testing to see how to access the jar. So the question is, why does the very last println fail? if a Configuration object is a Collection/Iterable then surely I should be able to index into it?
Configuration is-a java.util.Iterable, but not a java.util.Collection. As can be seen in the Groovy GDK docs, the getAt() method (which corresponds to the [] operator) is defined on Collection, but not on Iterable. Hence, you can't index into a Configuration; a couple of workarounds are sketched below.
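For example, both of these avoid getAt() (a minimal sketch; resolving the configuration yields its set of files):
// force the Iterable into an indexable List
println configurations.loaderJar.toList()[0]
// or go through the resolved file set directly
println configurations.loaderJar.files.first()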

Using Gradle execute for wix

I have a build.gradle that compiles my project, runs tests, creates a jar, and then packages that with launch4j. I want to be able to use WiX to create an installer as well; however, I seem to be having a lot of trouble launching it from .execute().
The files necessary for candle and light are held in \build\installer. However, trying to access those files by calling execute() in the build file is always met with failure.
I have made a second build.gradle in /build/installer that does work. It is:
task buildInstaller {
    def command = project.rootDir.toString() + "//" + "LSML Setup.wxs"
    def candleCommand = ['candle', command]
    def candleProc = candleCommand.execute()
    candleProc.waitFor()
    def lightCommand = ['light', '-ext', 'WixUIExtension', "LSML Setup.wixobj"]
    def lightProc = lightCommand.execute()
}
Is there some way I can run the second build file from the main one and have it work, or is there a way to call execute() directly and have it work?
Thanks.
If your project consists of a few Gradle builds (Gradle projects), you should use dependencies; working with the execute() method is a bad idea. I would do it this way:
ROOT/candle/candle.gradle
task build(type: Exec) {
    commandLine 'cmd', '/C', 'candle.exe', '...'
}
ROOT/app/build.gradle
task build(dependsOn: ':candle:build') {
    println 'build candle'
}
ROOT/app/settings.gradle
include ':candle'
project(':candle').projectDir = "$rootDir/../candle" as File
BTW, I had problems with the Exec task, so in my projects I replaced it with ant.exec(), and the candle task may look like this:
task candle << {
    def productWxsFile = new File(buildDir, "Product.wxs")
    ant.exec(executable: candleExe, failonerror: false, resultproperty: 'candleRc') {
        arg(value: '-out')
        arg(value: buildDir.absolutePath + "\\")
        arg(value: '-arch')
        arg(value: 'x86')
        arg(value: '-dInstallerDir=' + installerDir)
        arg(value: '-ext')
        arg(value: wixHomeDir + "\\WixUtilExtension.dll")
        arg(value: productWxsFile)
        arg(value: dataWxsFile)
        arg(value: '-v')
    }
    if (!ant.properties['candleRc'].equals('0')) {
        throw new Exception('ant.exec failed, rc: ' + ant.properties['candleRc'])
    }
}
More information about multi-project builds can be found at http://www.gradle.org/docs/current/userguide/multi_project_builds.html.

The SetupBuilder plugin can do the job. It creates the launch4j launcher for your Java application, signs it, creates the msi file and signs it. You do not need to work with the complex WiX toolset syntax.
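A rough sketch of wiring SetupBuilder in; note that the plugin id and the property names below are assumptions from memory of the i-net software SetupBuilder project, so check its documentation before relying on them:
plugins {
    id 'de.inetsoftware.setupbuilder' version '...'
}
setupBuilder {
    vendor = 'MyCompany'   // assumed property name
    application = 'LSML'   // assumed property name
}
// the plugin is documented to add packaging tasks such as msi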

How to Report Results to Sauce Labs using Geb/Spock?

I want to use the Sauce Labs Java REST API to send Pass/Fail status back to the Sauce Labs dashboard. I am using Geb+Spock, and my Gradle build creates a test results directory where results are output in XML. My problem is that the results XML file doesn't seem to be generated until after the Spock specification's cleanupSpec() exits. This causes my code to report the results of the previous test run, rather than the current one. Clearly not what I want!
Is there some way to get to the results from within cleanupSpec() without relying on the XML? Or a way to get the results to file earlier? Or some alternative that will be much better than either of those?
Some code:
In build.gradle, I specify the testResultsDir. This is where the XML file is written after the Spock specifications exit:
drivers.each { driver ->
    task "${driver}Test"(type: Test) {
        cleanTest
        systemProperty "geb.env", driver
        testResultsDir = file("$buildDir/test-results/${driver}")
        systemProperty "proj.test.resultsDir", testResultsDir
    }
}
Here is the setupSpec() and cleanupSpec() in my LoginSpec class:
class LoginSpec extends GebSpec {
    @Shared def SauceREST client = new SauceREST("redactedName", "redactedKey")
    @Shared def sauceJobID
    @Shared def allSpecsPass = true

    def setupSpec() {
        sauceJobID = driver.getSessionId().toString()
    }

    def cleanupSpec() {
        def String specResultsDir = System.getProperty("proj.test.resultsDir") ?: "./build/test-results"
        def String specResultsFile = this.getClass().getName()
        def String specResultsXML = "${specResultsDir}/TEST-${specResultsFile}.xml"
        def testsuiteResults = new XmlSlurper().parse(new File(specResultsXML))

        // read error and failure counts from the XML
        def errors = testsuiteResults.@errors.text()?.toInteger()
        def failures = testsuiteResults.@failures.text()?.toInteger()
        if ((errors + failures) > 0) { allSpecsPass = false }

        if (allSpecsPass) {
            client.jobPassed(sauceJobID)
        } else {
            client.jobFailed(sauceJobID)
        }
    }
}
The rest of this class contains login specifications that do not interact with SauceLabs. When I read the XML, it turns out that it was written at the end of the previous LoginSpec run. I need a way to get to the values of the current run.
Thanks!
Test reports are generated after a Specification has finished execution, and the generation is performed by the build system, in your case Gradle. Spock has no knowledge of that, so you are unable to get that information from within the test.
You can, on the other hand, quite easily get that information from Gradle. The Test task has two methods that might be of interest to you here: addTestListener() and afterSuite(). The cleaner solution is to use the first method: implement a test listener and put your logic in the listener's afterSuite() (not in the task configuration). You would probably need to put that listener implementation in buildSrc, as it looks like you have a dependency on SauceREST, and you would need to build and compile your listener class before being able to use it as an argument to addTestListener() in the build.gradle of your project.
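A minimal sketch of that approach, inlined as a closure on the Test task for brevity; the SauceREST calls mirror the question's code, and sauceJobID is an assumption here, since the session id must somehow be handed from the test to the build script (e.g. via a results file or stdout):
test {
    afterSuite { descriptor, result ->
        // react only once, when the root suite finishes
        if (descriptor.parent == null) {
            def client = new SauceREST("redactedName", "redactedKey")
            if (result.failedTestCount == 0) {
                client.jobPassed(sauceJobID) // sauceJobID: assumed to be available here
            } else {
                client.jobFailed(sauceJobID)
            }
        }
    }
}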
Following on from erdi's suggestion, I've created a Sauce Gradle helper library, which provides a Test Listener that parses the test XML output and invokes the Sauce REST API to set the pass/fail status.
The library can be included by adding the following to your build.gradle file:
import com.saucelabs.gradle.SauceListener

buildscript {
    repositories {
        mavenCentral()
        maven {
            url "https://repository-saucelabs.forge.cloudbees.com/release"
        }
    }
    dependencies {
        classpath group: 'com.saucelabs', name: 'saucerest', version: '1.0.2'
        classpath group: 'com.saucelabs', name: 'sauce_java_common', version: '1.0.14'
        classpath group: 'com.saucelabs.gradle', name: 'sauce-gradle-plugin', version: '0.0.1'
    }
}

gradle.addListener(new SauceListener("YOUR_SAUCE_USERNAME", "YOUR_SAUCE_ACCESS_KEY"))
You will also need to output the Selenium session id for each test, so that the SauceListener can associate the Sauce job with the pass/fail status. To do this, write the following line to stdout:
SauceOnDemandSessionID=SELENIUM_SESSION_ID
