I use JUnit 5 and Cucumber in my tests. When running the tests in parallel everything works as I want, but when it comes to a scenario outline, the examples create additional threads. That is, if I set cucumber.execution.parallel.config.fixed.parallelism=4, the scenarios run in 4 threads, but when execution reaches the Examples of an outline, one additional thread is created for each example. How can I apply the parallelism to exactly the scenarios, and not the feature files? Or make the feature containing the scenario outline run its scenarios one by one?
My junit-platform.properties
cucumber.publish.quiet=true
cucumber.execution.parallel.enabled=true
cucumber.execution.parallel.config.strategy=fixed
cucumber.execution.parallel.config.fixed.parallelism=4
A scenario outline is not a single scenario; it is multiple scenarios written in a compact form. When Cucumber processes an outline, it generates a standard scenario for each row of the examples table, and each of those generated scenarios runs in its own thread.
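If the goal is to keep the generated examples from running concurrently while everything else stays parallel, one option worth trying is the exclusive-resources mechanism of cucumber-junit-platform-engine. A minimal sketch; the tag name serial and the resource name outline-lock are placeholders you can pick freely. In junit-platform.properties:

cucumber.execution.exclusive-resources.serial.read-write=outline-lock

And in the feature file, tag the outline (the steps here are invented for illustration):

@serial
Scenario Outline: check value <value>
  Given the value <value>
  Then it is accepted

  Examples:
    | value |
    | 1     |
    | 2     |

Every generated example inherits the @serial tag and therefore needs exclusive read-write access to the same resource, so the examples execute one at a time while untagged scenarios keep running in parallel.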
I asked in my previous question whether Karate is capable of executing tests on specific data sets (for instance, based on priority p0, p1) given in a CSV file.
Now my second question: can Karate execute tests on specific data sets in a CSV file in parallel?
For example, TestNG's DataProvider supports data-provider-thread-count; here's an example of usage.
I've read the documentation regarding parallel execution in Karate, but I did not find anything on this type of parallel feature. Can you please let me know whether this is possible in Karate? Thank you.
Yes, if you use a Scenario Outline, each row will run in parallel. This applies even to the "dynamic" Scenario Outline, as explained here: https://github.com/intuit/karate#dynamic-scenario-outline
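For the CSV part, the dynamic outline can take its rows straight from the file, because the Examples header cell is just an expression. A rough sketch (file name, columns, and steps are invented; syntax as per the dynamic-scenario-outline docs linked above; baseUrl assumed defined in karate-config.js):

Scenario Outline: create user: <name>
  Given url baseUrl
  And path 'users'
  And request { name: '#(name)', priority: '#(priority)' }
  When method post
  Then status 201

  Examples:
    | read('users.csv') |

And since that header cell is an expression, you could also filter the list (say, down to the p0 rows) before it reaches the outline.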
Karate runs each Scenario in parallel, and behind the scenes each Examples row is turned into a Scenario. This is mentioned a few paragraphs below this section of the docs: https://intuit.github.io/karate/#parallel-stats
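On the runner side, the standard parallel entry point looks roughly like this (a sketch assuming Karate 1.x; the path, tag, and thread count are examples):

import com.intuit.karate.Results;
import com.intuit.karate.Runner;

class UsersRunner {
    public static void main(String[] args) {
        // run every scenario matching the tag (including generated Examples rows) on 5 threads
        Results results = Runner.path("classpath:users")
                .tags("@p0") // hypothetical priority tag
                .parallel(5);
        System.out.println("scenarios failed: " + results.getFailCount());
    }
}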
Extending the question Cleanup steps for Cucumber scenarios: I am aware that I can use tagged @After hooks to repeat the last few steps for all scenarios matching the tag. However, this implementation will live in my Java classes, and my business users will have no idea about it. Also, my acceptance tests are huge, around 200. Let's say each feature file contains 10 scenarios, and the last 3-4 steps are common to all of them in that feature file. So I will have 20 feature files and 20 unique tags. I can create 20 @After hook functions and silently perform those steps. But how will my business owners know about this if they cannot see the technical implementation?
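For reference, one of those tagged hooks would look roughly like this (a sketch; the tag and the cleanup actions are made up):

import io.cucumber.java.After;
import io.cucumber.java.Scenario;

public class OrderCleanupHooks {

    // runs after every scenario tagged @orders; invisible in the feature file
    @After("@orders")
    public void cleanUpOrders(Scenario scenario) {
        // silently repeat the feature's common last steps here,
        // e.g. close the open order and log the user out
    }
}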
The purpose of Background is to repeat the same steps at the beginning of each scenario. We could have easily achieved this by using tagged @Before hooks, so why does Background exist? If we had a new 'Postground' keyword, the opposite of Background, the problem above could be solved. What do you think?
Note: I have logged an issue for this, but it got closed by @aslakhellesoy. I think I did not articulate the problem statement well.
Instead of repeating the same steps one by one, you can extract helper methods that perform the individual actions those steps perform, and call those helper methods either one by one from individual steps, or in sequence from one overarching step.
That way you still make visible to the business users what happens, without having to spell out all the individual steps.
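A minimal sketch of that idea (the step wordings, class name, and helper names are invented):

import io.cucumber.java.en.When;

public class CheckoutSteps {

    // the helper methods hold the real actions, so the steps stay thin
    private void completeCheckout() {
        // drive the application: submit the basket, confirm payment
    }

    private void logOut() {
        // drive the application: end the session
    }

    // one business-readable step that runs the whole common tail in sequence
    @When("the user completes checkout and logs out")
    public void userCompletesCheckoutAndLogsOut() {
        completeCheckout();
        logOut();
    }

    // the same actions stay available as individual steps
    @When("the user completes checkout")
    public void userCompletesCheckout() {
        completeCheckout();
    }

    @When("the user logs out")
    public void userLogsOut() {
        logOut();
    }
}

The overarching step keeps the shared tail visible in every scenario as a single line, which addresses the visibility concern in the question.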
For more information, check the Cucumber documentation on Helper Methods.
If you still have more questions (I realise the documentation on Helper Methods isn't very extensive), please join the Cucumber Slack.
For reporting to TestRail on automated BDD (cucumber-jvm) runs, we are using the Jenkins TestRail plugin https://github.com/jenkinsci/testrail-plugin, and we are getting false positives for test cases from scenario outlines.
The default implementation logs scenario outline example executions as multiple executions of the same test case in the same run. If the last example to run passed, the test case is marked as passed for the run, even if all the other examples failed.
Has anyone experienced this behaviour, and did you find a way to change it, so that the test case fails if any example fails, or so that each example execution is listed as a separate test case?
I would report this behaviour to the authors of the plugin. The behaviour you describe is clearly very wrong.
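One possible mitigation, assuming the plugin keys results by scenario name (an assumption, not verified against the plugin), is to interpolate the example parameters into the outline title; recent cucumber-jvm versions substitute the placeholders into the generated scenario names, so each example reports under a distinct name. The steps here are invented for illustration:

Scenario Outline: Log in as <user> expecting <result>
  When the user logs in as "<user>"
  Then the login result is "<result>"

  Examples:
    | user  | result  |
    | alice | success |
    | bob   | failure |

Whether the plugin then treats these as separate test cases still depends on its matching logic, so reporting the bug upstream remains the right call.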
I have a gherkin scenario similar to following:
Scenario Outline: Test some behaviour
Given a set of preconditions
When an event occurs
Then my application has to behave in a particular manner
And respond as expected
When I execute this scenario my report says
0 Scenarios, 0 steps executed.
However, when I execute a scenario with Examples, my setup works fine.
Am I missing something?
A Scenario Outline only makes sense with Examples. If you swap to just Scenario, you should be fine.
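Applied to the scenario above:

Scenario: Test some behaviour
  Given a set of preconditions
  When an event occurs
  Then my application has to behave in a particular manner
  And respond as expected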
I am reading a lot about Gherkin, and I have already read that it is not good to repeat steps, and that the Background keyword exists for this purpose. But in the example on this page, they repeat the same Given again and again. Could it be that I am doing something wrong? I need to know your opinion about it:
As with several things, this is a topic that will generate different opinions. In this particular example, I would have moved the "Given that I select the post" step to the Background section, as it seems to be a pre-requisite for all scenarios in this feature. Of course, this would leave the scenarios in the feature without an actual Given section, but those steps would be incorporated from the Background section on execution.
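For instance, a sketch of that restructuring (the feature title, scenario names, and remaining steps are invented; only the shared Given comes from the example under discussion):

Feature: Post actions

  Background:
    Given that I select the post

  Scenario: Edit the post
    When I edit the post
    Then the changes are saved

  Scenario: Delete the post
    When I delete the post
    Then the post is no longer listed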
I have also seen cases where the decision to move steps to the Background is a trade-off between having more or fewer feature files and how these are structured. For example, if there are 10 scenarios for a particular feature with a lot of similar steps between them, but 1 or 2 scenarios that do not require a particular step, then those 1 or 2 scenarios would have to be moved into a new feature file in order to keep the exact same steps in the Background section of the original feature.
Of course it is correct to keep the scenarios like this. From a tester's perspective, the scenarios/test cases should run independently; therefore, you can keep these tests separate for each functionality.
But if you are doing integration testing, some of these test cases can be merged, so you can cover multiple test cases in one scenario.
And the "given" statement is repeating, therefore you can put that in the background, so you don't have to call it in each scenarios.
Note: these separate scenarios will be handy when you run the scripts selectively with tags, when you just have to check a specific functionality or a bug fix.