I would like to run all my features and then, after all of them, re-run all the scenarios that failed and have a certain tag (for example, a "rerun-on-fail" tag).
I can always parse the results of the first run and then run all the filtered scenarios manually, but I was wondering whether there could be a way to dynamically add scenarios to the "queue" at runtime, probably by making a custom test runner. However, the Cucumber runner class is final, and overall the Cucumber code doesn't seem very open to extension. Any ideas how to achieve this?
Edit: it looks like there's an interface, FeatureSupplier, which looks pretty good for this.
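For anyone else landing here: besides a custom FeatureSupplier, recent Cucumber-JVM versions ship a built-in rerun plugin that records failed scenarios in a file, and a second run can combine that file with a tag filter. A command sketch (the Maven invocation and file paths are illustrative assumptions, not part of the question):

```shell
# First run: record failed scenarios in rerun.txt (path is an example)
mvn test -Dcucumber.plugin="rerun:target/rerun.txt"

# Second run: re-execute only the recorded failures that also carry the tag
mvn test -Dcucumber.features="@target/rerun.txt" \
         -Dcucumber.filter.tags="@rerun-on-fail"
```

This avoids parsing results by hand, at the cost of a second JVM invocation rather than a single dynamic in-process queue.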
I’ve been tasked with investigating whether something can be added to our MR approval process to highlight whether proposed Merge Requests (MRs) in GitLab, which are C++ based code, contain any new unit tests or modifications to existing tests. The overall aim is to remind developers and approvers that they need to consider unit testing.
Ideally a small script would run and detect the presence of additional tests or changes (this I can easily write, and I accept that there’s a limit to how much can be done here) and display a warning on the MR if they weren’t detected.
An additional step, if at all possible, would be to block the MR until either further commits are pushed that meet the criteria, or an (extra/custom) GitLab MR field is completed explaining why unit testing is not appropriate for this change. This field would be kept with the MR for audit purposes. I accept that this is not foolproof, but I am hoping to pilot it as part of a bigger push for more unit-test coverage.
As mentioned, I can easily write a script in, say, Python to check for unit tests in the commit(s); what I don't know is whether/how I can hook this into the GitLab MR process (I looked at webhooks, but they seem to focus on notifying other systems rather than being transactional), and whether GitLab is extensible enough for us to achieve the additional step above. Any thoughts? Can this be done, and if so, how would I go about it?
Measuring the lack of unit tests

"detect the presence of additional tests or changes"

I think you are looking for the wrong thing here. The fact that tests have changed, or that there are additional tests, does not mean that the MR contains any unit tests for the submitted code.

The underlying problem is of course a hard one. A good approximation of what you want is typically to check how many lines of code are covered by the test suite. If the test suite covers more LOCs after the MR than before, then the developer has done their homework and the test suite has improved; if the coverage has grown smaller, then there is a problem.

Of course it's still possible for a user to submit unit tests that are totally unrelated to their code changes, but at least the overall coverage has improved (or: if you already have 100% coverage before the MR, then any MR that keeps the coverage at 100% and adds new code has obviously added unit tests for the new code).
Finally, to come to your question: yes, it's possible to configure a GitLab project to report the test-coverage change introduced by an MR.

https://docs.gitlab.com/ee/ci/pipelines/settings.html#test-coverage-parsing

You obviously need to create a coverage report from your unit-test run. How you do this depends on the unit-testing framework you are using, but the GitLab documentation gives some hints.
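As a sketch for a C++ project, assuming gcovr is used to produce the coverage summary (the tool choice, build commands, and regex are assumptions), the job in .gitlab-ci.yml could expose the coverage number like this:

```yaml
test:
  stage: test
  script:
    - cmake -B build -DCMAKE_CXX_FLAGS="--coverage"
    - cmake --build build
    - ctest --test-dir build
    - gcovr --print-summary
  # GitLab scans the job log with this regex to extract the coverage value,
  # which it can then show (and diff) on the MR.
  coverage: '/^lines:\s*\d+\.\d+%/'
```

Newer GitLab versions accept the `coverage:` keyword per job as shown; older versions configure the same regex in the project CI/CD settings page linked above.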
You don't need a web hook or anything like that. This should be something you can more or less trivially solve with just an extra job in your .gitlab-ci.yml. Run your python script and have it exit nonzero if there are no new tests, ideally with an error message indicating that new tests are required. Now when MRs are posted your job will run and if there are no new tests the pipeline will fail.
If you want the pipeline to fail very fast, you can put this new job at the head of the pipeline so that nothing else runs if this one fails.
You will probably want to make it conditional so that it only runs as part of an MR, otherwise you might get false failures (e.g. if just running the pipeline against some arbitrary commit on a branch).
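As a minimal sketch of such a script (the test-file naming conventions are assumptions — adjust them to your project's layout), the detection logic can be kept as a pure function so it is easy to unit test, with a thin wrapper around `git diff`:

```python
import subprocess

# Naming conventions below are assumptions; adapt them to your project layout.
TEST_MARKERS = ("test_", "_test.", "/tests/")

def touches_tests(changed_files, markers=TEST_MARKERS):
    """Return True if any changed path looks like a unit test file."""
    return any(marker in path for path in changed_files for marker in markers)

def changed_files_against(base_ref):
    """List files changed between base_ref and HEAD via `git diff --name-only`."""
    result = subprocess.run(
        ["git", "diff", "--name-only", f"{base_ref}...HEAD"],
        check=True, capture_output=True, text=True,
    )
    return [line for line in result.stdout.splitlines() if line]

# Intended CI usage (sketch): in a merge-request pipeline GitLab exposes the
# target branch as CI_MERGE_REQUEST_TARGET_BRANCH_NAME, so the job could do:
#   files = changed_files_against("origin/" + os.environ["CI_MERGE_REQUEST_TARGET_BRANCH_NAME"])
#   if not touches_tests(files):
#       sys.exit("No new or modified unit tests detected in this MR.")
```

Exiting nonzero when `touches_tests` returns False is what makes the pipeline (and therefore the MR) fail.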
I am automating acceptance tests defined in a specification written in Gherkin using Elixir. One way to do this is an ExUnit addon called Cabbage.
Now ExUnit seems to provide a setup hook, which runs before each individual test, and a setup_all hook, which runs once before the whole suite.
Now when I try to isolate my Gherkin scenarios by resetting the persistence within the setup hook, it seems that the persistence is purged before each step definition is executed. But one scenario in Gherkin almost always needs multiple steps which build up the test environment and execute the test in a fixed order.
The other option, the setup_all hook, on the other hand, resets the persistence once per feature file. But a feature file in Gherkin almost always includes multiple scenarios, which should ideally be fully isolated from each other.
So the aforementioned hooks seem to allow me to isolate single steps (which I consider pointless) and whole feature files (which is far from optimal).
Is there any way to isolate each scenario instead?
First of all, there are alternatives, for example whitebread.

If all your features need some similar initial step, background steps may be something to look into. Sadly, those changes were mixed into a much larger rewrite of the library that never got merged. There is another PR which is also mixed in with other functionality and is currently waiting on a companion library update. So, for now, that doesn't work.

I haven't tested how the library behaves with setup hooks, but setup_all should work fine.

There is such a thing as tags, which I think haven't been published with the latest release yet, but are in master. They work with the callback tag. You can take a closer look at the example in the tests.

Things are currently in a bit of a mess; I don't have as much time for this library as I would like.

Hope this helps you a little bit :)
Requirement:
I have 2 test cases, and the number will grow in the future. I need a way to run these 2 test cases in multiple environments in parallel at runtime.

So I can either make multiple copies of these test cases for the multiple environments, add them to an empty test suite, and set them to run in parallel, all via a Groovy script. Or find a way to run each test case in parallel through some code.

I tried tcase.run(properties, async) but it did not work.

Need help. Thank you.
You are mixing together unrelated things.
If you have a non-Pro installation, then you can parameterize the endpoints. This is accomplished by editing all your endpoints with a SoapUI property, and passing these to your test run. This is explained in the official documentation.
If you have a -Pro license, then you have access to the Environments feature, which essentially wraps the above for you in a convenient manner. Again: consult official documentation.
Then a separate question is how to run these in parallel. That will very much depend on what you have available. In the simplest case, you can create a shell script that calls testrunner the appropriate number of times with appropriate arguments. Official documentation is available. There are also options to run from Maven - official documentation - in which case you can use any kind of CI to run these.
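As a sketch of the shell-script route (the property name `endpoint`, the URLs, and the project file name are assumptions for illustration), one testrunner invocation per environment can simply be backgrounded:

```shell
#!/bin/sh
# Launch one testrunner per environment in the background; -P sets a
# SoapUI project property (here the assumed "endpoint" property), and
# `wait` blocks until both runs have finished.
testrunner.sh -Pendpoint=https://staging.example.com project.xml &
testrunner.sh -Pendpoint=https://qa.example.com project.xml &
wait
```

With a Pro license you would pass `-E <environment-name>` instead of parameterizing the endpoint yourself.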
I do not understand how Groovy would play into all this, unless you would like to get really fancy and run all this from JUnit, which also has official documentation available.

If you need additional information, you could read through the official documentation and perhaps clarify your question.
I'm using Cucumber for testing my application. I have to set up a large amount of data for a feature and clean it up after the feature is complete. After doing some research on the web, I found out that there are hooks only for scenarios, but no before and after hooks for features.
Also, I found that cucumber notifies a formatter on its execution life cycle.
So, the question is, can I use a custom formatter and listen to before_feature and after_feature events to init and clean data? Is it allowed?
No, you cannot use a formatter for this. If you are trying to set up the data, then run many scenarios, then clean up the data, be aware that this makes your scenarios very fragile. Instead, what you should do is set up the data for each scenario and clean it up at the end. You can do this very easily with a Background, e.g.

Feature: Large data test

  Background:
    Given I have large data

  Scenario: foo
    ...

  Scenario: bar
    ...

You would be better off making the loading of the large data set fast (use an SQL dump), and only using it when you absolutely have to. Feature hooks are an anti-pattern, which is why Cucumber doesn't support them.
I have hundreds of test specifications written in Spock. All of these are functional tests and can be run independently. But I have come across a situation where I need to run a specific test before running some other test.
This was very easy to achieve using a JUnit Test Suite, and it was very straightforward in Eclipse. But since all my tests are Groovy tests, there is no easy way to create a Test Suite in Spring IDE for the Spock tests (written in Groovy).
Can someone please share some ideas as to how we can create a Test suite and run some specific tests and also define the order of tests.
Any help would be much appreciated.
Spock specifications are valid JUnit tests (or suites) as well. That's why they are recognized by tools such as STS. You should be able to add them to a test suite just like any other JUnit test.

On the other hand, it doesn't sound like good practice if your tests depend on execution order.

If certain tasks need to be performed before the test execution, they should be placed in the setup() method. If that logic is common to more than one test, consider extracting it to a parent class.
If all you need is sequential execution of methods within a spec, have a look at @spock.lang.Stepwise, which is handy for testing workflows. Otherwise, you have the same possibilities as with plain JUnit: you can use JUnit (4) test suites, model test suites in your build tool of choice (which might not help within STS), or define test suites via Eclipse run configurations. I don't know how far support for the latter goes, but at the very least, it should allow you to run all tests in a package.
Although I think it won't allow you to specify the order of the tests, you could use Spock's Runner configuration or the @IgnoreIf/@Requires built-in extensions. Have a look at my response to a similar question. It's probably also worth having a look at the RunnerConfiguration javadoc, as it shows that you can include classes directly instead of using annotations.
If the tests you want to run in a specific order are part of the same Spock Specification, you can use the @Stepwise annotation to direct that the tests (feature methods) are executed in the order they appear in the Specification class.

As others mentioned, it's best to avoid this dependency if you can, because of the complexity it introduces. For example, what happens if the first test fails? Does that leave the system in an undefined state for the subsequent tests? So it would be better to avoid intra-test dependencies by using setup() and cleanup() methods (or setupSpec() and cleanupSpec()).
Another option is to combine two dependent tests into a single multi-stage test with multiple when:/then: block pairs in sequence.
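To illustrate @Stepwise (the class and feature names here are made up for the example), a spec whose feature methods run strictly top to bottom would look like:

```groovy
import spock.lang.Specification
import spock.lang.Stepwise

@Stepwise
class CheckoutWorkflowSpec extends Specification {

    def "user adds an item to the cart"() {
        expect:
        true  // first feature method runs first
    }

    def "user pays for the cart"() {
        expect:
        true  // runs only if the previous feature passed;
              // on failure, the remaining features are skipped
    }
}
```

Note that @Stepwise also makes Spock skip the remaining feature methods once one fails, which directly addresses the "undefined state" concern above.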