Import Cucumber results to Jira/Xray - groovy

I am able to run the Cucumber/Groovy tests (with Maven/pom.xml) locally. I'm trying to import the test results (results.json) generated by Cucumber to Jira with Xray.
I'm unable to find the steps or procedures required for this. The only source I found is https://confluence.xpand-it.com/display/public/XRAY/Import+Execution+Results,
but it is not applicable to my project. There is no CI/CD at the moment. Is there any way to generate results/reports and import them to Jira every time I run a test (or multiple tests)?

For Gherkin-based frameworks such as Cucumber, you can't simply submit the results. This is because Xray needs to have the Gherkin phrases, which cannot be inferred from the results file.
So you need to choose one of the possible Cucumber flows: either make Xray the master for editing the Cucumber scenarios, or use Git/SVN for that and then sync them to Xray.
These steps are detailed in the previous link.
You can see some useful tutorials here.
There are some Cucumber-specific tutorials, such as this one, but they're not fully detailed. You can see a more technically detailed tutorial for Serenity BDD that makes those steps more visible for the two different flows (you'll then need to adapt it to Cucumber specifics, but the principles are the same).
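Once the scenarios themselves are known to Xray, the Cucumber JSON results can be submitted through Xray's REST API. The sketch below is a minimal example against the Xray server/DC endpoint for Cucumber results; the base URL, credentials, and file path are placeholders you would adapt to your own Jira instance:

```python
import base64
import json
import urllib.request


def build_import_url(jira_base_url):
    """Xray server/DC endpoint for importing Cucumber JSON results."""
    return jira_base_url.rstrip("/") + "/rest/raven/1.0/import/execution/cucumber"


def import_cucumber_results(jira_base_url, user, password, results_path):
    """POST the Cucumber JSON report; Xray replies with the Test Execution issue."""
    url = build_import_url(jira_base_url)
    with open(results_path, "rb") as f:
        payload = f.read()
    request = urllib.request.Request(url, data=payload, method="POST")
    request.add_header("Content-Type", "application/json")
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    request.add_header("Authorization", "Basic " + token)
    with urllib.request.urlopen(request) as response:
        return json.load(response)
```

A call such as `import_cucumber_results("https://jira.example.com", "user", "secret", "target/results.json")` could then be run after each local test run, until a CI/CD pipeline takes over that step.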

Related

Cucumber - UI and API implementation of the same scenario

One thing that I really like about behave (https://behave.readthedocs.io/en/stable/) is that you can use the stage flag and it will run different step implementations for each stage. If you pass the flag --stage=ui, then all step implementations inside ui_steps will run.
I don't want to be stuck with behave, but I didn't see this feature in other runners (like cucumber.js, or even Cucumber for Java).
Any idea on how to implement this?
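For context, behave derives the steps directory name from the stage: with --stage=ui it loads steps from a ui_steps/ directory (and, if present, a ui_environment.py) instead of the default steps/. The layout below is an assumed example of how the same features can run against two implementations:

```
features/
    booking.feature      # shared feature files
    steps/               # default implementations:  behave
    ui_steps/            # selected with:            behave --stage=ui
    rest_steps/          # selected with:            behave --stage=rest
```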
I believe this is possible in cucumber-js. You can pass the location of the step definitions to the Cucumber runner. If you have step definitions in separate folders for API and UI tests, you can change your configuration accordingly in your npm script or in the configuration of the automation tool being used.
You can have two sets of support code and specify which to use via the CLI with --require. Like many things this is easier to manage using profiles.
Aslak (the creator of Cucumber) has a good talk where he is doing something similar to this, using different support code against the same features and steps to test different parts of the stack:
https://www.youtube.com/watch?v=sUclXYMDI94

Integration Testing and Load Testing: using the same scenarios (JVM)

At the moment, I'm using two different frameworks for REST API integration testing and load/stress testing: respectively, Geb (or Cucumber) and Gatling. But most of the time, I'm re-writing in the load/performance scenarios pieces of code that I've already written for integration testing.
So the question is: is there a framework (running on the JVM), or simply a way, to write integration tests (for a strict REST API use case), preferably programmatically, and then assemble load testing scenarios from those integration tests?
I've read that Cucumber could maybe do that, but I'm lacking a proper example.
The requirements :
write integration tests programmatically
for any integration test, have the ability to "extract" values (the same way gatling can extract json paths for instance)
assemble the integration tests in a load test scenario
If anyone has some experience to share, I'd be happy to read any blog article, GitHub repository, or whatever source dealing with such an approach.
Thanks in advance for your help.
It sounds like you want to extract a library that you use both for your integration tests as well as your load test.
Both tools you are referring to are able to use an external jar.
Assuming you use Maven or Gradle as your build tool, create a new module that you refer to from both your integration tests and your load tests, and place all interaction logic in that module. This should allow you to reuse the code you need.
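As a sketch of that setup with Maven (module and group names here are hypothetical), the parent POM aggregates a shared module, and both test modules declare it as a dependency:

```xml
<!-- parent pom.xml: aggregate the shared module and both test modules -->
<modules>
  <module>api-client</module>         <!-- shared REST interaction logic -->
  <module>integration-tests</module>  <!-- Geb/Cucumber tests -->
  <module>load-tests</module>         <!-- Gatling simulations -->
</modules>

<!-- in integration-tests/pom.xml and in load-tests/pom.xml -->
<dependency>
  <groupId>com.example</groupId>
  <artifactId>api-client</artifactId>
  <version>${project.version}</version>
</dependency>
```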

How to keep gherkin files in sync with automated tests in specflow or other BDD gherkin/cucumber frameworks

So I'm confused about the process/steps to keep automated tests in sync with gherkin/feature files in specflow. Assuming the feature files are written in gherkin and checked into git source control.
I see that there is a tool to generate stub automated tests from a gherkin file, and that flows naturally into letting a developer implement those tests.
My question is: if the features and spec change, what is the workflow for refactoring or updating those tests to keep them in sync? Is it done manually by the developer, or do SpecFlow or other BDD-driven tools have something to help you manage the refactoring of the test files?
There is no tool support that will update the steps when the wanted behavior changes.
The steps used for automating the specifications have to be maintained manually, in the same way the steps were implemented when they were new.
Anyone capable of implementing the automation code can do it. This may be a developer, a tester, or someone else with sufficient knowledge of the domain and of programming.

Tests statistics (number of tests by type, time spent to run by type)

In the current project we are using TeamCity as a CI platform and we have a bunch of projects and builds up and running.
The next step in our process is to track some statistics about our tests. So we are looking for a tool that could help us to get these numbers and make them visible for each build.
In the first place we want to keep track of the following numbers:
Number of unit tests
Number of specflow tests tagged as #ui
Number of specflow tests tagged as #controller
And also time spent running each of the test categories above.
Some details about the current scenario:
.net projects
nUnit for the unit tests
SpecFlow for functional tests categorized as #controller and #ui
rake for the build scripts
TeamCity as a CI Server.
I'm looking for tool and/or practice suggestions to help us track those numbers.
The issue here is your requirement for tags. SpecFlow/NUnit/TeamCity/dotCover integration is already developed enough to do everything that you require, except for the tagging.
I'm wondering how much of a mix you expect to have between UI and Controller tests. Assuming that you are correctly separating your domains (see Dan North - Whose domain is it anyway), you should never get scenarios tagged with these two tags in the same feature. So I assume it's just a case of separating the UI features from the functionality (controller) features.
I've recently started separating my features in just this way, by adding namespace folders in my tests assembly, mirroring how you would separate Models, ViewModels and Views (etc.), and TeamCity is definitely clever enough to report coverage at each stage of the drill-down through assemblies and namespaces.
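Since SpecFlow turns tags into NUnit categories, one pragmatic option is a small script over the NUnit result XML that TeamCity already collects. The sketch below assumes the NUnit 2.x result format, where each test-case element carries a time attribute and nested category elements; the sample data is invented for illustration:

```python
import xml.etree.ElementTree as ET
from collections import defaultdict


def stats_by_category(nunit_xml):
    """Count tests and sum execution time per category in an NUnit 2.x result file."""
    root = ET.fromstring(nunit_xml)
    counts = defaultdict(int)
    times = defaultdict(float)
    for case in root.iter("test-case"):
        seconds = float(case.get("time", "0"))
        names = [c.get("name") for c in case.iter("category")] or ["uncategorized"]
        for name in names:
            counts[name] += 1
            times[name] += seconds
    return dict(counts), dict(times)


SAMPLE = """
<test-results>
  <test-suite name="Specs">
    <results>
      <test-case name="Login" time="1.5">
        <categories><category name="ui"/></categories>
      </test-case>
      <test-case name="Orders" time="0.5">
        <categories><category name="controller"/></categories>
      </test-case>
    </results>
  </test-suite>
</test-results>
"""
```

The resulting numbers could then be pushed back to TeamCity as custom statistics via its service messages, so they show up per build.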

How to get cucumber to run the same steps against Selenium and a headless browser

I've been doing some work testing web applications with Cucumber and I currently have a number of steps set up to run with Culerity. This works well, but there are times when it would be nice to run the exact same stories in Selenium.
I see two possible approaches that may work:
Writing each step so that it performs the step appropriately depending on the value of some global variable.
Having separate step definition files and somehow selectively including the correct one.
What is the preferred method for accomplishing this?
Third option: see if Culerity implements the Webrat API. Its README file says: "Culerity lets you (...) reuse existing Webrat-Style step definitions", but I couldn't find much more than that. Ideally, you would be able to switch backends with a config option or command-line argument without having to touch the step definitions.
Of course, this would only work if you're not testing JavaScript, which Culerity supports but Webrat doesn't.
Have you looked at Capybara? It will allow you to use a variety of web drivers, and to test JavaScript-related features as well.
I think this is the one you are looking for. http://robots.thoughtbot.com/post/1658763359/thoughtbot-and-the-holy-grail
You can also schedule the tests to run in Jenkins; the Jenkins software is open source and can run on a local machine. There is a Cucumber plugin for Jenkins, so you get reporting for your project on top of continuous test runs.
