I have three different feature files which hold different test scenarios: configA.feature, configB.feature, common.feature.
In the steps file I have two tagged before hooks, one for each config (A/B):
#Before("#ConfigA")
public void configA() {
//some settings
}
#Before("#ConfigB")
public void configB() {
//some settings
}
These two configs are mutually exclusive, so only one should be executed for any particular scenario run, as the second would overwrite settings made by the first.
What I want to achieve is to be able to run scenarios as follows:
configA.feature with the ConfigA hook executed
configB.feature with the ConfigB hook executed
common.feature with the ConfigA hook executed
common.feature with the ConfigB hook executed
I've tried annotating the features in the feature files as:
configA with @ConfigA
configB with @ConfigB
common with both @ConfigA and @ConfigB
but this results in common.feature always being executed with both Before hooks at the same time.
As I'm using a JUnit wrapper with the Cucumber runner, I've also tried creating separate test classes with @CucumberOptions(tags = ...) specified, but this didn't work for me either (runner sketch below).
Is what I'm trying to do even possible with Cucumber? If so, how can I achieve it?
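For reference, the separate test classes looked roughly like this (a sketch; the class name and feature path are made up, and depending on the Cucumber version tags takes a String[] or a single tag expression):
import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;
import org.junit.runner.RunWith;

@RunWith(Cucumber.class)
@CucumberOptions(
        features = "classpath:features/common.feature", // made-up path
        tags = {"@ConfigA"}                              // older API: String[]; newer: "@ConfigA"
)
public class CommonWithConfigARunner {
}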
Related
I want to have two different scenarios in the same feature.
The thing is that Scenario 1 needs to be executed before Scenario 2. I have seen that this can be achieved through Cucumber hooks, but digging into the explanations, I found no concrete Cucumber implementation in the examples.
How can I get Scenario 1 executed before Scenario 2?
The feature file is like this:
@Events @InsertExhPlan @DelExhPln
Feature: Insert an Exh Plan and then delete it
@InsertExhPlan
Scenario: Add a new ExhPlan
Given I login as admin
And I go to automated test
When I go to ExhPlan section
And Insert a new exh plan
Then The exh plan is listed
@DeleteExhPlan
Scenario: Delete an Exh Plan
Given I login as admin
And Open the automatized tests edition
When I go to the exh plan section
And The new exh plan is deleted
Then The new exhibitor plan is deleted
The Hooks file is:
package com.barrabes.utilities;
import cucumber.api.java.After;
import cucumber.api.java.Before;
import static com.aura.steps.rest.ParentRestStep.logger;
public class Hooks {

    @Before(order = 1)
    public void beforeScenario() {
        logger.info("================This will run before every Scenario================");
    }

    @Before(order = 0)
    public void beforeScenarioStart() {
        logger.info("-----------------Start of Scenario-----------------");
    }

    @After(order = 0)
    public void afterScenarioFinish() {
        logger.info("-----------------End of Scenario-----------------");
    }

    @After(order = 1)
    public void afterScenario() {
        logger.info("================This will run after every Scenario================");
    }
}
The order is now as it should be, but I don't see how the Hooks file controls execution order.
You don't use Hooks for that purpose. Hooks are for code that needs to run before and/or after tests or test suites, not for controlling the order of features or scenarios.
Cucumber scenarios are executed top to bottom. For the example you showed, Scenario: Add a new ExhPlan will execute before Scenario: Delete an Exh Plan if you pass the tag @Events to the test runner. Also, you should not have the scenario tags at the feature level, so remove @InsertExhPlan and @DelExhPln from the Feature line. Alternatively, you can pass a comma-separated list of scenario tags to the test runner in the order you want them executed; for example, if you need to run scenario 2 before scenario 1, pass the tags for the corresponding scenarios in that order. You can also do this from your CI environment, for example with Jenkins jobs that pass the scenario tags in a specific order. And if you wish to run in the default order, simply pass the feature tag, as in the runner sketch below.
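For example, a JUnit runner that runs everything tagged @Events might look like this (a sketch using the same cucumber.api packages as in the question; the feature path is a placeholder):
import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;
import org.junit.runner.RunWith;

@RunWith(Cucumber.class)
@CucumberOptions(
        features = "src/test/resources/features", // placeholder path
        tags = {"@Events"}
)
public class EventsRunner {
}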
About Hooks: they should be for code that needs to run for all features and scenarios. For specific things you need to run for a particular feature, use a Background in the feature file; a Background block runs before each scenario in a given feature file.
Is it possible to use some kind of @Before annotation?
I want to pre-load data (POST) before launching my tests (GET).
But I only want parallel execution on the GETs.
I was thinking of defining a method with @LoadWith("preload_generation.properties") containing:
number.of.threads=1
ramp.up.period.in.seconds=1
loop.count=1
Just to be sure that we execute it only once.
But it looks like I cannot choose the order of execution, and I need this POST method to be the first one executed.
I also tried putting a @TestMappings with my 'loading method' at the top of the class.
But that doesn't work either.
I am not aware of any way that ZeroCode would be able to do this, as it is specific to re-leveraging tests already written in JUnit. My suggestion would be to follow a more traditional approach and use the standard JUnit setup methods
@BeforeClass
public static void setupClass() {
    // setup before the entire class
}

@Before
public void setup() {
    // setup before each individual test
}
rather than attempting to use a tool outside of its intended purposes.
As per the scenario you described, where you want to ensure data is loaded before your tests are executed, especially when run under load by ZeroCode, the suggestion is to create your data using
@BeforeClass
public static void setupClass() {
    // setup before the entire class
}
While this may take a bit more thought about how you create your data, creating it before all the tests ensures that your load test excludes data setup time.
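A sketch of that shape (class and helper names are made up; postTestData and getResource stand in for your own REST calls):
import org.junit.BeforeClass;
import org.junit.Test;

public class PreloadedLoadTest {

    @BeforeClass
    public static void setupClass() {
        // Runs exactly once before any test in this class:
        // POST the data that the GET tests will read (hypothetical helper).
        postTestData();
    }

    @Test
    public void getReturnsPreloadedData() {
        // Only the GET calls end up being exercised under load.
        getResource();
    }

    private static void postTestData() {
        // placeholder for your POST call
    }

    private static void getResource() {
        // placeholder for your GET call
    }
}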
There are several 'After' hooks, and one of them should run before the others. How can this be configured in Cucumber JS?
You can explicitly configure hooks to run in a certain order:
@Before(order = 10) // Annotated method
public void doSomething() {
    // Do something before each scenario
}

Before(10, () -> { // Lambda
    // Do something before each scenario
});
It seems this also works for @After hooks.
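For example, in Cucumber-JVM (a sketch; as I understand it, @After hooks run in descending order, so the hook with the higher order value runs first):
import cucumber.api.java.After;

public class TeardownHooks {

    @After(order = 10)
    public void closeSession() {
        // With the order reversed for @After, this runs before releaseResources()
    }

    @After(order = 1)
    public void releaseResources() {
        // Runs last
    }
}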
Edit: Leaving this in case it's useful for any Java people; I missed that it was JS, sorry! But for JavaScript:
Hooks are executed in the order in which they're defined. If that doesn't do it for you, create one hook and explicitly call the other methods.
Ordered top to bottom in the hooks file:
After({}, async function() {
    return "This will run first";
});

After({}, async function() {
    return "This will run second";
});
You can use the tags concept for each scenario in the Cucumber feature file and use the same tag on each @After annotation, which determines for which scenarios that specific @After hook executes.
Hooks is a file that is executed from top to bottom. If you have multiple @After hooks, order them so that whatever you want to close first comes first and whatever you want to execute last comes last. If you have tags in your feature file, pass that information to the specific @After hook, as in the sketch below.
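For example (a sketch; @Browser is a made-up tag):
import cucumber.api.java.After;

public class TaggedHooks {

    // Runs only after scenarios tagged @Browser in the feature file
    @After("@Browser")
    public void closeBrowser() {
        // tear down the browser session here
    }
}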
I want to change the scenario status for known issues into the After hook.
Something like:
@After
public void afterScenario(Scenario scenario) {
    if (scenario.isFailed() && scenario.getSourceTagNames().contains("@knownIssue")) {
        // scenario.add(Result.SKIPPED)
    }
}
The idea is that tests which fail because of a known bug should be marked as skipped in the test report.
You can annotate the scenario with @KnownIssue and then run Cucumber with --tags "not @KnownIssue" or its @CucumberOptions equivalent.
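The @CucumberOptions equivalent might look like this (a sketch; "not @KnownIssue" assumes a Cucumber version with tag-expression support, while older versions use "~@KnownIssue" instead):
import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;
import org.junit.runner.RunWith;

@RunWith(Cucumber.class)
@CucumberOptions(tags = {"not @KnownIssue"}) // skip scenarios tagged @KnownIssue
public class RegressionRunner {
}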
It is your test execution engine (TestNG, JUnit) that is responsible for this aspect.
So the question is: how do you change test execution status programmatically? Here is one explained. You can hook into the test execution engine either in the way explained in the article or from your method; what is important is that you work with the test execution engine and not Cucumber.
Gherkin with QAF supports what you are looking for. With QAF
you can achieve it with a TestNG after-method-invocation listener, as sketched below.
In addition, you can also add meta-data at the step level and handle it in the onFailure method of a step listener to convert the exception into a skip exception, depending on the step meta-data. That automatically skips tests where that step is called and fails.
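A sketch of such a TestNG listener (the knownIssue group name is an assumption; register the listener via @Listeners or testng.xml):
import java.util.Arrays;

import org.testng.IInvokedMethod;
import org.testng.IInvokedMethodListener;
import org.testng.ITestResult;

public class KnownIssueListener implements IInvokedMethodListener {

    @Override
    public void beforeInvocation(IInvokedMethod method, ITestResult result) {
        // no-op
    }

    @Override
    public void afterInvocation(IInvokedMethod method, ITestResult result) {
        // Downgrade failures of tests in the (hypothetical) "knownIssue" group to SKIP
        boolean knownIssue = Arrays.asList(result.getMethod().getGroups()).contains("knownIssue");
        if (result.getStatus() == ITestResult.FAILURE && knownIssue) {
            result.setStatus(ITestResult.SKIP);
        }
    }
}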
I have heard of this term many times (in the context of programming) but couldn't find any explanation of what it meant. Any good articles or explanations?
I think you're referring to test fixtures:
The purpose of a test fixture is to ensure that there is a well known
and fixed environment in which tests are run so that results are
repeatable. Some people call this the test context.
Examples of fixtures:
Loading a database with a specific, known set of data
Erasing a hard disk and installing a known clean operating system installation
Copying a specific known set of files
Preparation of input data and set-up/creation of fake or mock objects
(source: wikipedia, see link above)
Here are also some practical examples from the documentation of the 'Google Test' framework.
The term fixture varies based on context, programming language, or framework.
1. A known state against which a test is running
One of the most time-consuming parts of writing tests is writing the
code to set the world up in a known state and then return it to its
original state when the test is complete. This known state is called
the fixture of the test.
PHP-Unit documentation
A test fixture (also known as a test context) is the set of
preconditions or state needed to run a test. The developer should set
up a known good state before the tests, and return to the original
state after the tests.
Wikipedia (xUnit)
2. A file containing sample data
Fixtures is a fancy word for sample data. Fixtures allow you to
populate your testing database with predefined data before your tests
run. Fixtures are database independent and written in YAML. There is
one file per model.
RubyOnRails.org
3. A process that sets up a required state.
A software test fixture sets up the system for the testing process by
providing it with all the necessary code to initialize it, thereby
satisfying whatever preconditions there may be. An example could be
loading up a database with known parameters from a customer site
before running your test.
Wikipedia
I think the PHPUnit documentation explains this very well:
One of the most time-consuming parts of writing tests is writing the
code to set the world up in a known state and then return it to its
original state when the test is complete. This known state is called
the fixture of the test.
The Yii documentation also describes test fixtures well:
Automated tests need to be executed many times. To ensure the testing
process is repeatable, we would like to run the tests in some known
state called fixture. For example, to test the post creation feature
in a blog application, each time when we run the tests, the tables
storing relevant data about posts (e.g. the Post table, the Comment
table) should be restored to some fixed state.
Here's a simple example of a test fixture:
<?php
use PHPUnit\Framework\TestCase;

class StackTest extends TestCase
{
    protected $stack;

    protected function setUp()
    {
        $this->stack = [];
    }

    protected function tearDown()
    {
        $this->stack = [];
    }

    public function testEmpty()
    {
        $this->assertTrue(empty($this->stack));
    }

    public function testPush()
    {
        array_push($this->stack, 'foo');
        $this->assertEquals('foo', $this->stack[count($this->stack) - 1]);
        $this->assertFalse(empty($this->stack));
    }

    public function testPop()
    {
        array_push($this->stack, 'foo');
        $this->assertEquals('foo', array_pop($this->stack));
        $this->assertTrue(empty($this->stack));
    }
}
?>
This PHPUnit test has methods named setUp and tearDown: before running each test you set up your data, and once finished you can restore it to the initial state.
Exactly on this topic, JUnit has a well-explained doc. Here is the link!
The related portion of the article is:
Tests need to run against the background of a known set of objects. This set of objects is called a test fixture. When you are writing tests you will often find that you spend more time writing the code to set up the fixture than you do in actually testing values.
To some extent, you can make writing the fixture code easier by paying careful attention to the constructors you write. However, a much bigger savings comes from sharing fixture code. Often, you will be able to use the same fixture for several different tests. Each case will send slightly different messages or parameters to the fixture and will check for different results.
When you have a common fixture, here is what you do:
Add a field for each part of the fixture
Annotate a method with @org.junit.Before and initialize the variables in that method
Annotate a method with @org.junit.After to release any permanent resources you allocated in setUp
For example, to write several test cases that want to work with different combinations of 12 Swiss Francs, 14 Swiss Francs, and 28 US Dollars, first create a fixture:
public class MoneyTest {
    private Money f12CHF;
    private Money f14CHF;
    private Money f28USD;

    @Before public void setUp() {
        f12CHF = new Money(12, "CHF");
        f14CHF = new Money(14, "CHF");
        f28USD = new Money(28, "USD");
    }
}
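To illustrate the @After half, here is a minimal sketch with a hypothetical JDBC connection (the in-memory URL is a placeholder):
import java.sql.Connection;
import java.sql.DriverManager;

import org.junit.After;
import org.junit.Before;

public class DatabaseFixtureTest {
    private Connection connection;

    @Before public void setUp() throws Exception {
        connection = DriverManager.getConnection("jdbc:h2:mem:test"); // placeholder URL
    }

    @After public void tearDown() throws Exception {
        connection.close(); // release the resource allocated in setUp
    }
}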
In Xamarin.UITest it is explained as follows:
Typically, each Xamarin.UITest is written as a method that is referred
to as a test. The class which contains the test is known as a test
fixture. The test fixture contains either a single test or a logical
grouping of tests and is responsible for any setup to make the test
run and any cleanup that needs to be performed when the test finishes.
Each test should follow the Arrange-Act-Assert pattern:
Arrange – The test will setup conditions and initialize things so that the test can be actioned.
Act – The test will interact with the application: enter text, push buttons, and so on.
Assert – The test examines the results of the actions performed in the Act step to determine correctness. For example, the
application may verify that a particular error message is
displayed.
Link to the original article for the excerpt above
And in Xamarin.UITest code it looks like the following:
using System;
using System.IO;
using System.Linq;
using NUnit.Framework;
using Xamarin.UITest;
using Xamarin.UITest.Queries;
namespace xamarin_stembureau_poc_tests
{
    [TestFixture(Platform.Android)]
    [TestFixture(Platform.iOS)]
    public class TestLaunchScreen
    {
        IApp app;
        Platform platform;

        public TestLaunchScreen(Platform platform)
        {
            this.platform = platform;
        }

        [SetUp]
        public void BeforeEachTest()
        {
            app = AppInitializer.StartApp(platform);
        }

        [Test]
        public void AppLaunches()
        {
            app.Screenshot("First screen.");
        }

        [Test]
        public void LaunchScreenAnimationWorks()
        {
            app.Screenshot("Launch screen animation works.");
        }
    }
}
Hope this is helpful to anyone in search of a better understanding of fixtures in programming.
I'm writing this answer as a quick note for myself on what a "fixture" is.
Test Fixtures: Using the Same Data Configuration for Multiple Tests
If you find yourself writing two or more tests that operate on similar data, you can use a test fixture. This allows you to reuse the same configuration of objects for several different tests.
You can read more in the googletest documentation.
Fixtures can also be used during integration testing or during development (say, UI development where data comes from a development database).
Fake users for a database, for development or testing:
myproject/fixtures/my_fake_user.json
[
  {
    "model": "myapp.person",
    "pk": 1,
    "fields": {
      "first_name": "John",
      "last_name": "Lennon"
    }
  },
  {
    "model": "myapp.person",
    "pk": 2,
    "fields": {
      "first_name": "Paul",
      "last_name": "McCartney"
    }
  }
]
You can read more in the Django docs; fixtures like this are typically loaded with manage.py loaddata.