Creating Test suites in Spring IDE for the Spock Test specs - groovy

I have hundreds of test specifications written in Spock. All of these are functional tests and can be run independently. But I have come across a situation where I need to run a specific test before running some other test.
This was very easy to achieve with a JUnit test suite and was very straightforward in Eclipse. But since all my tests are Groovy tests, there is no easy way to create a test suite in Spring IDE for the Spock tests (written in Groovy).
Can someone please share some ideas on how to create a test suite, run specific tests, and define the order in which they run?
Any help would be much appreciated.

Spock specifications are valid JUnit tests (or suites) as well; that's why they are recognized by tools such as STS. You should be able to add them to a test suite just like any other JUnit test.
On the other hand, it doesn't sound like good practice for your tests to depend on execution order.
If certain tasks need to be performed before test execution, they should go into the setup() method. If that logic is common to more than one test, consider extracting it into a parent class.
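To make that concrete, here is a minimal sketch of grouping Spock specs with a plain JUnit 4 suite; it assumes JUnit 4 is on the classpath, and the spec names are made-up stand-ins. In practice the Suite runner executes the listed classes in the order they are declared.

    import org.junit.runner.RunWith
    import org.junit.runners.Suite
    import spock.lang.Specification

    // Hypothetical specs standing in for real functional tests.
    class PrepareDataSpec extends Specification {
        def "seeds the data the later spec depends on"() {
            expect: true
        }
    }

    class DependentFeatureSpec extends Specification {
        def "uses the previously seeded data"() {
            expect: true
        }
    }

    // Spock specs are JUnit 4 tests, so a plain JUnit 4 suite can group them;
    // the Suite runner executes the listed classes in declaration order.
    @RunWith(Suite)
    @Suite.SuiteClasses([PrepareDataSpec, DependentFeatureSpec])
    class FunctionalTestSuite {}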

If all you need is sequential execution of methods within a spec, have a look at @Stepwise (spock.lang.Stepwise), which is handy for testing workflows. Otherwise, you have the same possibilities as with plain JUnit: you can use JUnit 4 test suites, model test suites in your build tool of choice (which might not help within STS), or define test suites via Eclipse run configurations. I don't know how far support for the latter goes, but at the very least it should allow you to run all tests in a package.
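As a quick illustration of @Stepwise (the spec and its helper methods are hypothetical), the feature methods below run top to bottom, and if one fails the remaining ones are skipped:

    import spock.lang.Specification
    import spock.lang.Stepwise

    // Feature methods run in declaration order; a failure skips the rest.
    @Stepwise
    class CheckoutWorkflowSpec extends Specification {

        def "user logs in"() {
            expect:
            login("alice", "secret")
        }

        def "user adds an item to the cart"() {
            expect:
            addToCart("book-123")
        }

        def "user completes the purchase"() {
            expect:
            checkout()
        }

        // Hypothetical helpers standing in for calls into the real application.
        private boolean login(String user, String password) { true }
        private boolean addToCart(String sku) { true }
        private boolean checkout() { true }
    }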

Although I think it won't let you specify the order of the tests, you could use Spock's runner configuration or the built-in @IgnoreIf/@Requires extensions. Have a look at my response to a similar question. It's probably also worth looking at the RunnerConfiguration Javadoc, which shows that you can include classes directly instead of using annotations.
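As a rough sketch of the annotation-based side of this (the property names and feature methods are invented for the example), @Requires and @IgnoreIf each take a precondition closure and skip a feature method when the condition isn't met:

    import spock.lang.IgnoreIf
    import spock.lang.Requires
    import spock.lang.Specification

    class EnvironmentDependentSpec extends Specification {

        // Runs only when the JVM is started with -Dintegration=true.
        @Requires({ System.getProperty("integration") == "true" })
        def "talks to the real backend"() {
            expect:
            true
        }

        // Skipped when a CI environment variable is present.
        @IgnoreIf({ System.getenv("CI") != null })
        def "opens a local browser"() {
            expect:
            true
        }
    }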

If the tests you want to run in a specific order are part of the same Spock Specification, you can use the @Stepwise annotation to make the feature methods execute in the order they appear in the Specification class.
As others mentioned, it's best to avoid this dependency if you can because of the complexity it introduces. For example, what happens if the first test fails? Does that leave the system in an undefined state for the subsequent tests? It would be better to avoid intra-test dependencies by using the setup() and cleanup() methods (or setupSpec() and cleanupSpec()).
Another option is to combine two dependent tests into a single multi-stage test with multiple when:/then: block pairs in sequence.
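A small sketch of that style, with a throwaway class under test so the example is self-contained:

    import spock.lang.Specification

    class AccountLifecycleSpec extends Specification {

        def "an account can be opened and then closed"() {
            given:
            def account = new Account()

            when: "the account is opened"
            account.open()

            then: "it is active"
            account.active

            when: "the account is then closed"
            account.close()

            then: "it is no longer active"
            !account.active
        }

        // Minimal stand-in for the real class under test.
        static class Account {
            boolean active = false
            void open()  { active = true }
            void close() { active = false }
        }
    }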

Related

Java Cucumber rerun certain failed scenarios

I would like to run all my features and then, after all of them have finished, re-run the scenarios that failed and carry a certain tag (for example a "rerun-on-fail" tag).
I can always parse the results of the first run and then run the filtered scenarios manually, but I was wondering whether there is a way to dynamically add scenarios to the "queue" at runtime, probably by writing a custom test runner. However, the Cucumber runner class is final, and the Cucumber code in general doesn't seem very open to extension. Any ideas how to achieve this?
Edit: it looks like there is a FeatureSupplier interface, which looks promising for this.
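If a two-pass approach is acceptable instead of re-queueing at runtime, Cucumber-JVM's built-in rerun plugin can record failed scenarios and feed them into a second runner. A rough sketch with JUnit 4 runners follows; the package names, file paths, and tag are placeholders, and the exact annotation syntax (e.g. whether tags is a string or an array) varies between Cucumber versions:

    import org.junit.runner.RunWith
    import io.cucumber.junit.Cucumber
    import io.cucumber.junit.CucumberOptions

    // First pass: run all features and record failing scenarios in rerun.txt.
    @RunWith(Cucumber)
    @CucumberOptions(
        features = ["classpath:features"],
        glue = ["com.example.steps"],
        plugin = ["rerun:target/rerun.txt"]
    )
    class RunAllFeaturesTest {}

    // Second pass: the '@' prefix tells Cucumber to read scenario locations
    // from the file; the tag expression limits the rerun to tagged scenarios.
    @RunWith(Cucumber)
    @CucumberOptions(
        features = ["@target/rerun.txt"],
        glue = ["com.example.steps"],
        tags = "@rerun-on-fail"
    )
    class RerunTaggedFailuresTest {}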

How can I run only integration tests

Is there a way to only run integration tests, but not unit-tests?
I've tried:
cargo test --tests: runs unit + integration tests
cargo test --test test_name: runs one specified test
Is it currently not possible to only run integration tests or am I missing something?
You can run ONLY the integration tests by:
cargo test --test '*'
Please note that only '*' will work; neither * nor "*" works.
Reference: https://github.com/rust-lang/cargo/issues/8396
Thing is, Cargo doesn't really distinguish between integration tests and unit tests, since there isn't a real difference between the two in terms of how you manage and implement them; the difference is purely semantic. Not all codebases even have that separation. The book and the reference call them unit tests and integration tests for simplicity and to avoid confusion, but technically there is no such distinction.
Instead of separating tests into two logical categories, Cargo has a flexible filtering system, which allows you to run only the tests whose names match a certain pattern. The book has a section dedicated to this system. If you'd like to filter out certain tests because they take a long time to run or are otherwise undesirable to run alongside the others, annotate them with #[ignore]. Otherwise, use a consistent naming scheme for your tests so that you can filter them by name.
The Cargo reference page also mentions the fact that you can use the target options in the Cargo.toml manifest to control what is run when you use --tests.

Creating multiple implementations of a gherkin feature file

Suppose I have a feature file listing a number of scenarios (the actual contents of the feature file are irrelevant).
I would like to reuse the same feature file and provide a number of implementations:
Unit test suite implementation.
This implementation would mock away external aspects such as the DB/repositories.
This implementation would be fast to run.
Acceptance integration test suite implementation.
This would be run in a staging environment and would not mock anything (except perhaps external services where appropriate).
This implementation would be slow to run (because it requires all infrastructure to be up and running).
I have tried:
Placing the feature files in their own sub-project in a mono-repo.
Having other sub-projects depend on the feature files.
Implementing the tests.
Although this works, I can no longer jump from the feature file to the step definitions in IntelliJ (because they are in a different module), which lessens the appeal.
Has anyone else had any experience of doing something similar? Or would you recommend against doing this?
We have done something similar.
What you could do is specify two different runners, RunCucumberTest and RunCucumberIT: the first runs the unit tests and points at the step definitions for the unit tests, while the second runs the integration tests and points at the step definitions for the integration tests. In @CucumberOptions you can specify which step definitions (glue) the runner should use; just make sure to keep the "unit test" step definitions and the "integration test" step definitions in separate files/directories.
If there are any step definitions that don't depend on the distinction between unit and integration test, those could be in a "shared" step definitions file/directory, and called by both runners.
Files ending in *Test are picked up by the Maven Surefire plugin during the test phase.
Files ending in *IT are picked up by the Maven Failsafe plugin during the integration-test phase.
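Putting the two pieces together, here is a rough sketch of the two runners, written in Groovy here with placeholder package and directory names:

    import org.junit.runner.RunWith
    import io.cucumber.junit.Cucumber
    import io.cucumber.junit.CucumberOptions

    // Fast run against mocked dependencies; picked up by Surefire via the *Test suffix.
    @RunWith(Cucumber)
    @CucumberOptions(
        features = ["classpath:features"],
        glue = ["com.example.steps.unit", "com.example.steps.shared"]
    )
    class RunCucumberTest {}

    // Slow run against real infrastructure; picked up by Failsafe via the *IT suffix.
    @RunWith(Cucumber)
    @CucumberOptions(
        features = ["classpath:features"],
        glue = ["com.example.steps.integration", "com.example.steps.shared"]
    )
    class RunCucumberIT {}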
Hope this helps.

Isolating scenarios in Cabbage

I am automating acceptance tests defined in a specification written in Gherkin using Elixir. One way to do this is an ExUnit addon called Cabbage.
Now, ExUnit seems to provide a setup hook, which runs before each individual test, and a setup_all hook, which runs before the whole suite.
Now when I try to isolate my Gherkin scenarios by resetting the persistence within the setup hook, it seems that the persistence is purged before each step definition is executed. But one scenario in Gherkin almost always needs multiple steps which build up the test environment and execute the test in a fixed order.
The other option, the setup_all hook, on the other hand, resets the persistence once per feature file. But a feature file in Gherkin almost always includes multiple scenarios, which should ideally be fully isolated from each other.
So the aforementioned hooks seem to allow me to isolate single steps (which I consider pointless) and whole feature files (which is far from optimal).
Is there any way to isolate each scenario instead?
First of all, there are alternatives, for example: whitebread.
If all your features need some similar initial step, background steps might be something to look into. Sadly, that support was mixed into a much larger rewrite of the library that never got merged. There is another PR which is also mixed in with other functionality and is currently waiting on a companion library update, so for now that doesn't work.
I haven't tested how the library behaves with setup hooks, but setup_all should work fine.
There is also support for tags, which I think hasn't been published in a release yet but is in master. Tags work with callbacks; you can take a closer look at the example in the tests.
Things are currently a bit of a mess; I don't have as much time for this library as I would like.
Hope this helps you a little bit :)

Spock vs FitNesse

I've been looking into Spock, and I have experience with FitNesse. I'm wondering how people would choose one over the other, given that they appear to address the same or a similar problem space.
Also, for the folks who have been using Spock or other Groovy code for tests: do you see any noticeable performance degradation? Tests are supposed to give immediate feedback, and we know that when tests take longer to run, developers tend to run them less frequently, so I'm wondering whether slower test execution has had any impact in the real world.
Thanks
I am no FitNesse guy, so please take what I say with a grain of salt. It seems to me that FitNesse is trying to provide a programming-language-independent environment for specifying tests, used to give the programmer a more visual interface. In Spock, a Groovy AST transform is used to turn the data table into a Groovy program.
Since you basically stay within a programming language, it is easier in Spock to realize more complicated test setups; with FitNesse, you often seem to end up writing fixture code instead.
Personally, I don't need a test-execution button; I like the direct approach. I like not having to take care of even more classes just to enable testing, and I like looking at the code directly. For example, I want to run my tests from the command line, not from a web interface. That is surely possible in FitNesse too, but then the whole visual layer FitNesse is trying to give the user is just ballast for me. That's why I would choose Spock over FitNesse.
The advantage of the language-agnostic approach is, of course, that many test specifications can be shared between Java and .NET, so if that is a requirement for you, you may judge differently. It usually isn't one for me.
As for performance, I would not worry too much about that part.
