How to get the current retry count in Cucumber

I am using @ExtendedCucumberOptions in my runner class, where I can set a retry count: if a scenario fails, it is re-run up to the number of times given in the runner file. What I need is a way to find out, in the middle of a scenario, what the current retry count is.
My runner class is below; I am using the mkolisnyk dependency in my pom:
@ExtendedCucumberOptions(
    jsonReport = "target/81/cucumber.json",
    retryCount = 3)
@CucumberOptions(
    features = { "src/test/java/com/github/mkolisnyk/cucumber/features/Test.feature" },
    tags = { "@failed" },
    glue = "com/github/mkolisnyk/cucumber/steps",
    plugin = {
        "html:target/81", "json:target/81/cucumber.json",
        "pretty:target/81/cucumber-pretty.txt",
        "usage:target/81/cucumber-usage.json", "junit:target/81/cucumber-results.xml" })
public static class SampleTestRetry {
}
For example, I have set retryCount to 3. I start executing one of the scenarios in my suite; it fails on the first run and starts retrying. In the middle of the run (for example in a before hook) I want to fetch the number of the current retry. I have not been able to fetch the retry count from inside my test cases, so please help me with this.
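One library-independent workaround is to count the attempts yourself: keep a static map from scenario name to attempt number and bump it in a Before hook. A minimal sketch, assuming plain cucumber-java 1.x hooks are on the classpath; the hook class and the map are my own, not part of mkolisnyk's API, and this only works while all retries run in the same JVM:

import cucumber.api.Scenario;
import cucumber.api.java.Before;
import java.util.HashMap;
import java.util.Map;

public class RetryCountHooks {
    // Tracks how many times each scenario has started in this JVM.
    private static final Map<String, Integer> attempts = new HashMap<String, Integer>();

    @Before
    public void countAttempt(Scenario scenario) {
        Integer previous = attempts.get(scenario.getName());
        int current = (previous == null) ? 1 : previous + 1;
        attempts.put(scenario.getName(), current);
        // current == 1 is the first run; anything above is a retry
        System.out.println("Attempt #" + current + " of: " + scenario.getName());
    }
}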

Related

How to Enable/Disable a Particular Assertion in all Test Cases using Groovy?

I have a suite that runs the regression test cases in SoapUI. It has a Capture Response Time assertion that measures the time of each request. This is needed only on demand:
if metrics are needed, I enable the Capture Response Time assertion; if not, it should stay disabled.
I have written the code below to check whether the assertion is disabled. If it is disabled, fine; otherwise I need to disable it.
The following code throws:
java.lang.NullPointerException: Cannot get property 'disabled' on null object
Can anyone help with this?
def project = testRunner.getTestCase().getTestSuite().getProject().getWorkspace().getProjectByName("Regression");
// Loop through each test suite in the project
for (suite in project.getTestSuiteList()) {
    if (suite.name == "ReusableComponent") {
        // Loop through each test case
        for (tcase in suite.getTestCaseList()) {
            log.info(tcase.name)
            for (tstep in tcase.getTestStepList()) {
                stepName = tstep.name
                suiteName = suite.name
                caseName = tcase.name
                def testStep = testRunner.testCase.testSuite.project.testSuites["$suiteName"].testCases["$caseName"].getTestStepByName("$stepName")
                log.info(testStep.getAssertionByName("CaptureResponseTime").disabled)
            }
        }
    }
}
The statement below causes the NullPointerException, because getAssertionByName() returns null when a step has no assertion with that name:
log.info(testStep.getAssertionByName("CaptureResponseTime").disabled)
To avoid the NPE, use Groovy's safe navigation operator:
log.info(testStep.getAssertionByName("CaptureResponseTime")?.disabled)
If you need to disable the assertion, use the statement below:
testStep.getAssertionByName("CaptureResponseTime")?.disabled = true
Another tip: to get the project, do not go through the workspace.
Instead use:
def project = context.testCase.testSuite.project

Cucumber JUnit Step Execution Ignored

I have been desperately trying to resolve a Cucumber JUnit issue where step execution is ignored.
I just followed a simple example, defining a feature, a test and steps as below:
Feature: Campaign Budget Calculation
  Scenario: Valid Input Parameters
    Given campaign budget as 100 and campaign amount spent as 120
    When the campaign budget is less than campaign amount spent
    Then throw an Error
Test:
@RunWith(Cucumber.class)
@Cucumber.Options(glue = { "com.reachlocal.opt.dbas" })
public class CampaignTest {
}
Steps:
public class CampaignTestStepDefinitions {
    private Campaign campaign;

    @Given("^a campaign with (\\d+) of budget and (\\d+) of amount spent$")
    public void createCampaign(int arg1, int arg2) throws Throwable {
        CurrencyUnit usd = CurrencyUnit.of("USD");
        campaign = new Campaign();
        campaign.setCampaignBudget(Money.of(usd, arg1));
        campaign.setCampaignAmountSpent(Money.of(usd, arg2));
    }

    @When("^compare the budget and the amount spent$")
    public void checkCampaignBudget() throws Throwable {
        if (campaign.getCampaignBudget().isLessThan(campaign.getCampaignAmountSpent())) {
            campaign.setExceptionFlag(new Boolean(false));
        }
    }

    @Then("^check campaign exception$")
    public void checkCampaignException() throws Throwable {
        if (campaign.getExceptionFlag()) {
            assertEquals(new Boolean(true), campaign.getExceptionFlag());
        }
    }
}
When I run the JUnit test, the steps are skipped and the results show they were all ignored. I have tried without glue as well, but that does not help.
I am not sure why; simple example code from the Internet, like adding two numbers, works fine.
I'm running it in a Maven/Spring project using STS.
The @Given, @When and @Then expressions don't match the feature file. They're regular expressions that need to match lines in the feature file.
For example, for the feature line:
Given campaign budget as 100 and campaign amount spent as 120
in the steps file you've got:
@Given("^a campaign with (\\d+) of budget and (\\d+) of amount spent$")
but it should be:
@Given("^campaign budget as (\\d+) and campaign amount spent as (\\d+)$")
Then it should match and not ignore the step.
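Applied to all three steps, the definitions could look like this (a sketch: the regexes are copied verbatim from the feature above; the bodies come from the question, except that the flag is set to true on overspend, since setting it to false there looks like a second bug; assertTrue is assumed statically imported from org.junit.Assert):

@Given("^campaign budget as (\\d+) and campaign amount spent as (\\d+)$")
public void createCampaign(int budget, int spent) throws Throwable {
    CurrencyUnit usd = CurrencyUnit.of("USD");
    campaign = new Campaign();
    campaign.setCampaignBudget(Money.of(usd, budget));
    campaign.setCampaignAmountSpent(Money.of(usd, spent));
}

@When("^the campaign budget is less than campaign amount spent$")
public void checkCampaignBudget() throws Throwable {
    if (campaign.getCampaignBudget().isLessThan(campaign.getCampaignAmountSpent())) {
        campaign.setExceptionFlag(Boolean.TRUE); // budget exceeded
    }
}

@Then("^throw an Error$")
public void checkCampaignException() throws Throwable {
    assertTrue(campaign.getExceptionFlag());
}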
I just had the same issue. It's easy to miss in Eclipse because you still get a green tick, even though the output does say the steps were ignored.
Try this:
import org.junit.runner.RunWith;
import cucumber.api.junit.Cucumber;

@RunWith(Cucumber.class)
@Cucumber.Options(format = { "json:target/REPORT_NAME.json", "pretty",
        "html:target/HTML_REPORT_NAME" },
        features = { "src/test/resources/PATH_TO_FEATURE_FILE/NAME_OF_FEATURE.feature" })
public class Run_Cukes_Test {
}
This has always worked for me.
I also got the same error; it turned out that all the methods in my step definitions were private instead of public.

How to Report Results to Sauce Labs using Geb/Spock?

I want to use the Sauce Labs Java REST API to send Pass/Fail status back to the Sauce Labs dashboard. I am using Geb+Spock, and my Gradle build creates a test results directory where results are output in XML. My problem is that the results XML file doesn't seem to be generated until after the Spock specification's cleanupSpec() exits. This causes my code to report the results of the previous test run, rather than the current one. Clearly not what I want!
Is there some way to get to the results from within cleanupSpec() without relying on the XML? Or a way to get the results to file earlier? Or some alternative that will be much better than either of those?
Some code:
In build.gradle, I specify the testResultsDir. This is where the XML file is written after the Spock specifications exit:
drivers.each { driver ->
    task "${driver}Test"(type: Test) {
        cleanTest
        systemProperty "geb.env", driver
        testResultsDir = file("$buildDir/test-results/${driver}")
        systemProperty "proj.test.resultsDir", testResultsDir
    }
}
Here is the setupSpec() and cleanupSpec() in my LoginSpec class:
class LoginSpec extends GebSpec {
    @Shared def SauceREST client = new SauceREST("redactedName", "redactedKey")
    @Shared def sauceJobID
    @Shared def allSpecsPass = true

    def setupSpec() {
        sauceJobID = driver.getSessionId().toString()
    }

    def cleanupSpec() {
        def String specResultsDir = System.getProperty("proj.test.resultsDir") ?: "./build/test-results"
        def String specResultsFile = this.getClass().getName()
        def String specResultsXML = "${specResultsDir}/TEST-${specResultsFile}.xml"
        def testsuiteResults = new XmlSlurper().parse(new File(specResultsXML))
        // read error and failure counts from the XML
        def errors = testsuiteResults.@errors.text()?.toInteger()
        def failures = testsuiteResults.@failures.text()?.toInteger()
        if ((errors + failures) > 0) { allSpecsPass = false }
        if (allSpecsPass) {
            client.jobPassed(sauceJobID)
        } else {
            client.jobFailed(sauceJobID)
        }
    }
}
The rest of this class contains login specifications that do not interact with SauceLabs. When I read the XML, it turns out that it was written at the end of the previous LoginSpec run. I need a way to get to the values of the current run.
Thanks!
Test reports are generated after a specification has finished execution, and the generation is performed by the build system, in your case Gradle. Spock has no knowledge of it, so you cannot get that information from within the test.
You can, on the other hand, quite easily get that information from Gradle. The Test task has two methods that might be of interest here: addTestListener() and afterSuite(). The cleaner solution is the first one: implement a test listener and put your logic in its afterSuite() (rather than in the task configuration). You will probably need to put the listener implementation in buildSrc, because it depends on SauceREST and has to be compiled before it can be passed to addTestListener() in your project's build.gradle.
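A minimal sketch of such a listener, assuming it lives in buildSrc and the Sauce job id is handed in from outside; the class name and constructor are my own, while SauceREST, jobPassed() and jobFailed() are the same calls the spec above already uses:

import com.saucelabs.saucerest.SauceREST;
import org.gradle.api.tasks.testing.TestDescriptor;
import org.gradle.api.tasks.testing.TestListener;
import org.gradle.api.tasks.testing.TestResult;

// Lives in buildSrc/src/main/java so it is compiled before build.gradle uses it.
public class SauceResultListener implements TestListener {
    private final SauceREST client;
    private final String sauceJobId;

    public SauceResultListener(String username, String accessKey, String sauceJobId) {
        this.client = new SauceREST(username, accessKey);
        this.sauceJobId = sauceJobId;
    }

    public void beforeSuite(TestDescriptor suite) { }
    public void beforeTest(TestDescriptor test) { }
    public void afterTest(TestDescriptor test, TestResult result) { }

    public void afterSuite(TestDescriptor suite, TestResult result) {
        // Only the root suite (the one without a parent) carries the aggregated counts.
        if (suite.getParent() == null) {
            if (result.getFailedTestCount() > 0) {
                client.jobFailed(sauceJobId);
            } else {
                client.jobPassed(sauceJobId);
            }
        }
    }
}

It would then be wired up in build.gradle with test.addTestListener(new SauceResultListener(...)). Getting the job id from the spec (driver.getSessionId()) to the listener is the fiddly part and is left out here.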
Following on from erdi's suggestion, I've created a Sauce Gradle helper library, which provides a Test Listener that parses the test XML output and invokes the Sauce REST API to set the pass/fail status.
The library can be included by adding the following to your build.gradle file:
import com.saucelabs.gradle.SauceListener
buildscript {
repositories {
mavenCentral()
maven {
url "https://repository-saucelabs.forge.cloudbees.com/release"
}
}
dependencies {
classpath group: 'com.saucelabs', name: 'saucerest', version: '1.0.2'
classpath group: 'com.saucelabs', name: 'sauce_java_common', version: '1.0.14'
classpath group: 'com.saucelabs.gradle', name: 'sauce-gradle-plugin', version: '0.0.1'
}
}
gradle.addListener(new SauceListener("YOUR_SAUCE_USERNAME", "YOUR_SAUCE_ACCESS_KEY"))
You will also need to output the Selenium session id for each test so that the SauceListener can associate the Sauce job with the pass/fail status. To do this, write the following line to stdout:
SauceOnDemandSessionID=SELENIUM_SESSION_ID
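In the Geb spec above, that line could be printed from setupSpec(), where the driver is already available; a sketch:
println "SauceOnDemandSessionID=${driver.getSessionId()}"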

How to rerun the failed scenarios using Cucumber?

I'm using Cucumber for my tests. How do I rerun only the failed tests?
Run Cucumber with the rerun formatter:
cucumber -f rerun --out rerun.txt
It will output the locations of all failed scenarios to this file.
Then you can rerun them with:
cucumber @rerun.txt
Here is my simple and neat solution.
Step 1: Write your Cucumber runner class as shown below, with rerun:target/rerun.txt. Cucumber writes the line numbers of the failed scenarios to rerun.txt, for example:
features/MyScenarios.feature:25
features/MyScenarios.feature:45
We can use this file in Step 2. Name this file MyScenarioTests.java; it is your main runner for tagged scenarios. If any scenarios fail, it writes them to rerun.txt under the target directory.
@RunWith(Cucumber.class)
@CucumberOptions(
    monochrome = true,
    features = "classpath:features",
    plugin = {"pretty", "html:target/cucumber-reports",
        "json:target/cucumber.json",
        "rerun:target/rerun.txt"}, // creates a text file listing failed scenarios
    tags = "@mytag"
)
public class MyScenarioTests {
}
Step 2: Create another runner as shown below; call it FailedScenarios.java. Whenever you notice failed scenarios, run this file. It uses target/rerun.txt as the input for which scenarios to run.
@RunWith(Cucumber.class)
@CucumberOptions(
    monochrome = true,
    features = "@target/rerun.txt", // Cucumber picks the failed scenarios from this file
    format = {"pretty", "html:target/site/cucumber-pretty",
        "json:target/cucumber.json"}
)
public class FailedScenarios {
}
Every time you notice failed scenarios, run the file from Step 2.
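The same rerun-file approach also works when driving Cucumber from Gradle: the cucumber task below writes target/rerun.txt through the rerun plugin, and cucumberRerunFailed feeds that file back to the Cucumber CLI.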
task cucumber() {
    dependsOn assemble, compileTestJava
    doLast {
        javaexec {
            main = "io.cucumber.core.cli.Main"
            classpath = configurations.cucumberRuntime + sourceSets.main.output + sourceSets.test.output
            args = ['--plugin', 'json:target/cucumber-reports/json/cucumber.json',
                    '--plugin', 'rerun:target/rerun.txt',
                    '--glue', 'steps',
                    'src/test/resources']
        }
    }
}

task cucumberRerunFailed() {
    doLast {
        javaexec {
            main = "io.cucumber.core.cli.Main"
            classpath = configurations.cucumberRuntime + sourceSets.main.output + sourceSets.test.output
            args = ['--plugin', 'json:target/cucumber-reports/json/cucumber.json',
                    '@target/rerun.txt']
        }
    }
}
I know this is old, but I found my way here first and only later found a much more up-to-date answer (not the accepted one, but Cyril Duchon-Doris' answer):
https://stackoverflow.com/a/41174252/215789
Since Cucumber 3.0 you can use --retry to specify how many times to retry a scenario that failed:
https://cucumber.io/blog/open-source/announcing-cucumber-ruby-3-0-0/
Just tack it onto your cucumber command:
cucumber ... --retry 2
You need at least version 1.2.0 to use the @target/rerun.txt feature. After that, just create a runner that runs at the end and uses this file. Also, if you are using Jenkins, you can put a tag on the features that fail randomly so the build only fails if a scenario fails twice in a row.

MbUnit Icarus self-destructs on this test

I'm trying to test a multi-threaded IO class using MbUnit. My goal is to have the test fixture constructor execute three times, once for each Row on the class, and then, for each instance, execute the tests multiple times on parallel threads.
However, Icarus blows up with an 'index out of range' in TaskRunner. I can't capture the full stack trace; it spawns message boxes too fast.
What am I doing wrong, or is this a bug in MbUnit/Gallio?
using System;
using System.Collections.Generic;
using System.Text;
using Gallio.Framework;
using MbUnit.Framework;
using MbUnit.Framework.ContractVerifiers;
using System.IO;

namespace ImageResizer.Plugins.DiskCache.Tests {
    [TestFixture]
    [Row(0, 50, false)]
    [Row(0, 50, true)]
    [Row(8000, 100, true)]
    public class CustomDiskCacheTest {
        public CustomDiskCacheTest(int subfolders, int totalFiles, bool hashModifiedDate) {
            char c = System.IO.Path.DirectorySeparatorChar;
            string folder = System.IO.Path.GetTempPath().TrimEnd(c) + c + System.IO.Path.GetRandomFileName();
            cache = new CustomDiskCache(folder, subfolders, hashModifiedDate);
            this.quantity = totalFiles;
            for (int i = 0; i < quantity; i++) {
                cache.GetCachedFile(i.ToString(), "test", delegate(Stream s) {
                    s.WriteByte(32); // Just one space
                }, defaultDate, 10);
            }
        }

        int quantity;
        CustomDiskCache cache = null;
        DateTime defaultDate = new DateTime(2011, 1, 1);

        [ThreadedRepeat(150)]
        [Test(Order = 1)]
        public void TestAccess() {
            CacheResult r =
                cache.GetCachedFile(new Random().Next(0, quantity).ToString(), "test",
                    delegate(Stream s) { Assert.Fail("No files have been modified, this should not execute"); }, defaultDate, 100);
            Assert.IsTrue(System.IO.File.Exists(r.PhysicalPath));
            Assert.IsTrue(r.Result == CacheQueryResult.Hit);
        }

        volatile int seed = 0;

        [Test(Order = 2)]
        [ThreadedRepeat(20)]
        public void TestUpdate() {
            // try to get a unique date time value
            DateTime newTime = DateTime.UtcNow.AddDays(seed++);
            CacheResult r =
                cache.GetCachedFile(new Random().Next(0, quantity).ToString(), "test",
                    delegate(Stream s) {
                        s.WriteByte(32); // Just one space
                    }, newTime, 100);
            Assert.AreEqual<DateTime>(newTime, System.IO.File.GetLastWriteTimeUtc(r.PhysicalPath));
            Assert.IsTrue(r.Result == CacheQueryResult.Miss);
        }

        [Test(Order = 3)]
        public void TestClear() {
            System.IO.Directory.Delete(cache.PhysicalCachePath, true);
        }
    }
}
I won't answer the direct question about the bug, but I think the following steps will help you find the error without getting lost in popping message boxes:

1. Decrease totalFiles and subfolders to much lower values, to see if the error persists with 2 or even 1 file.
2. Your test code isn't as simple as it should be. Write tests for the tests so you know they run correctly; maybe those Random().Next calls are the problem, maybe something else. Tests should be easy.
3. Figure out which test breaks the system: your code contains three tests plus a constructor, so comment out two of the tests and see which one produces the error.
4. Your [ThreadedRepeat(150)] looks pretty sick; try a smaller number like 2 or 3. If the error is basic, even 2 threads might break it, and if you run 150 threads I can understand your trouble with message boxes.
5. Add logging and try/catch: catch that index exception and log your class state carefully. After inspecting it, I think you'll see the problem much more clearly.

Right now you can't figure out the problem: you have too many variables, not to mention you didn't provide the code for your cache class, which might contain some simple error that causes this before MbUnit features even begin to show up.
