How can I run only integration tests? - Rust

Is there a way to only run integration tests, but not unit-tests?
I've tried:
cargo test --tests: runs unit + integration tests
cargo test --test test_name: runs one specified test
Is it currently not possible to only run integration tests or am I missing something?

You can run ONLY the integration tests by:
cargo test --test '*'
Please note that only '*' (with the single quotes) works; neither a bare * nor "*" does.
Reference: https://github.com/rust-lang/cargo/issues/8396

The thing is, Cargo doesn't really distinguish between integration tests and unit tests, since there is no real difference between the two in terms of how you manage and implement them; the difference is purely semantic. Not all codebases even have that separation. The book and the reference call them unit tests and integration tests for simplicity and to avoid confusion, but technically there is no such distinction.
Instead of separating tests into two logical categories, Cargo has a flexible filtering system that lets you run only the tests whose names match a certain pattern. The book has a section dedicated to this system. If you'd like to exclude certain tests because they take a long time to run or are otherwise undesirable to run alongside the others, annotate them with #[ignore]. Otherwise, adopt a naming convention for your tests so that you can filter them by name.
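For illustration, a minimal sketch of both approaches, assuming a hypothetical tests/api_integration.rs file and an integration_ naming convention (neither of which comes from the question):

// tests/api_integration.rs — hypothetical file in the tests/ directory.

#[test]
fn integration_health_check() {
    // Selected by a name filter: `cargo test integration_`
    // (or together with all other tests/ files via `cargo test --test '*'`).
    assert_eq!(2 + 2, 4);
}

#[test]
#[ignore] // Skipped by default; run explicitly with `cargo test -- --ignored`.
fn integration_slow_end_to_end() {
    assert_eq!(1 + 1, 2);
}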
The Cargo reference page also mentions that you can use the target options in the Cargo.toml manifest to control what is run when you use --tests.

Related

Can I test just the code in a single module?

I've got a Rust project that uses a fairly large framework. Compilation and macro expansion take a really long time. If I make a tiny change to the code, it takes a minute or more before "cargo test" actually executes.
Is it possible to create a sub-project or sub-module within the same crate and test just the code in the module, assuming there are no dependencies on code outside the module?
You might be interested in "cargo workspaces" (https://doc.rust-lang.org/book/ch14-03-cargo-workspaces.html).
Essentially, instead of splitting your code into multiple mods, you split it into multiple crates. These crates can depend on each other via "path dependencies". For example, you could have something like:
[dependencies]
my_helper_crate = { path = "path/to/crate" }
The book has much more detail on this, but a nice feature of using workspaces is that your crates can have separate Cargo.tomls, but share a Cargo.lock, so you won't get issues around incompatible versions of crates.
With this setup, you can build one crate without building the rest of them, which shortens your dev feedback loop.
However, if you have crate_a which depends on crate_b, building crate_a still requires building crate_b; there's not really any getting around that. The benefit is mainly for the leaves of your dependency graph.
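For reference, a minimal sketch of the workspace root manifest (the member names are invented); each member keeps its own Cargo.toml while the shared Cargo.lock lives at the root:

# Cargo.toml at the workspace root — sketch with hypothetical member names.
[workspace]
members = ["app", "my_helper_crate"]

With that in place, cargo build -p my_helper_crate (or running cargo from inside that member's directory) builds just that crate.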
Yeah, cargo test will take arguments which match specific tests that you want to run (Cargo book). For example, if you have modules foo and bar, you can run cargo test foo to run tests from that module, excluding all others.
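As a small sketch (module and function names are invented), the filter works because the module path is part of each test's full name:

// src/lib.rs — hypothetical modules; `cargo test foo` runs only foo::tests::*.
pub mod foo {
    pub fn add(a: i32, b: i32) -> i32 { a + b }

    #[cfg(test)]
    mod tests {
        #[test]
        fn adds_two_numbers() {
            assert_eq!(super::add(2, 2), 4);
        }
    }
}

pub mod bar {
    pub fn double(x: i32) -> i32 { 2 * x }

    #[cfg(test)]
    mod tests {
        #[test]
        fn doubles_a_number() {
            assert_eq!(super::double(3), 6);
        }
    }
}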

Adding GitLab hooks to catch Merge Requests that lack unit tests

I’ve been tasked with investigating whether something can be added to our MR approval process to highlight whether proposed Merge Requests (MRs) in GitLab, which are C++ based code, contain any new unit tests or modifications to existing tests. The overall aim is to remind developers and approvers that they need to consider unit testing.
Ideally a small script would run and detect the presence of additional tests or changes (this I can easily write, and I accept that there’s a limit to how much can be done here) and display a warning on the MR if they weren’t detected.
An additional step, if at all possible, would be to block the MR until either further commits are pushed that meet the criteria, or an (extra/custom) GitLab MR field is completed explaining why unit testing is not appropriate for this change. This field would be held with the MR for audit purposes. I accept that this is not foolproof but am hoping to pilot this as part of a bigger push for more unit test coverage.
As mentioned, I can easily write a script in, say, Python to check for unit tests in the commit(s), but what I don’t know is whether/how I can hook this into the GitLab MR process (I looked at web-hooks but they seem to focus on notifying other systems rather than being transactional) and whether GitLab is extensible enough for us to achieve the additional step above. Any thoughts? Can this be done, and if so, how would I go about it?
Measuring the lack of unit tests
"detect the presence of additional tests or changes"
I think you are looking for the wrong thing here.
The fact that tests have changed, or that there are additional tests, does not mean that the MR contains any unit tests for the submitted code.
The underlying problem is, of course, a hard one.
A good approximation of what you want is typically to check how many lines of code are covered by the test suite.
If the test suite covers more lines of code after the MR than before, then the developer has done their homework and the test suite has improved. If the coverage has shrunk, then there is a problem.
Of course, it's still possible for a user to submit unit tests that are totally unrelated to their code changes, but at least the overall coverage has improved (or: if you already have 100% coverage before the MR, then any MR that keeps the coverage at 100% and adds new code has obviously added unit tests for the new code).
Finally, to come to your question:
Yes, it's possible to configure a GitLab project to report the test-coverage change introduced by an MR.
https://docs.gitlab.com/ee/ci/pipelines/settings.html#test-coverage-parsing
You obviously need to create a coverage report from your unit-test run.
How you do this depends on the unit-testing framework you are using, but the GitLab documentation gives some hints.
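As a rough sketch of the CI side (the job name, script, and regex below are placeholders to adapt to whatever your coverage tool prints), the value can be extracted with the coverage keyword on the job that runs the tests:

# .gitlab-ci.yml — sketch only; adjust the regex to your tool's output format.
unit-tests:
  stage: test
  script:
    - ./run_tests_with_coverage.sh        # hypothetical wrapper that prints a "TOTAL ... 87%" line
  coverage: '/TOTAL.*\s+(\d+%)$/'

GitLab can then show the parsed coverage value, and how it changed against the target branch, on the MR.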
You don't need a web hook or anything like that. This should be something you can more or less trivially solve with just an extra job in your .gitlab-ci.yml. Run your Python script and have it exit nonzero if there are no new tests, ideally with an error message indicating that new tests are required. Now, when MRs are posted, your job will run, and if there are no new tests, the pipeline will fail.
If you want the pipeline to fail very fast, you can put this new job at the head of the pipeline so that nothing else runs if this one fails.
You will probably want to make it conditional so that it only runs as part of an MR, otherwise you might get false failures (e.g. if just running the pipeline against some arbitrary commit on a branch).
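Putting those pieces together, a sketch of such a job; the script name and stage layout are assumptions, and the rules: clause restricts it to merge request pipelines:

# .gitlab-ci.yml — sketch; check_new_tests.py is a hypothetical script that exits nonzero
# when the diff contains no new or changed test files.
stages:
  - checks
  - build
  - test

check-new-tests:
  stage: checks
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
  script:
    # Compare the MR's diff base against the current commit and fail if no test files changed.
    - python3 check_new_tests.py "$CI_MERGE_REQUEST_DIFF_BASE_SHA" "$CI_COMMIT_SHA"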

Creating multiple implementations of a gherkin feature file

Suppose I have a feature file listing a number of scenarios (the actual contents of the feature file are irrelevant).
I would like to reuse the same feature file and provide a number of implementations:
Unit test suite implementation.
This implementation would mock away external aspects such as the DB/repositories.
This implementation would be fast to run.
Acceptance integration test suite implementation.
This would be run in a staging environment and would not mock anything (except perhaps external services where appropriate).
This implementation would be slow to run (because it requires all infrastructure to be up and running).
I have tried:
Placing the feature files in their own sub-project in a mono-repo.
Having other sub-projects depend on the feature files.
Implementing the tests.
Although this works, I can no longer jump from the feature file to the step definitions in IntelliJ (because they are in a different module), which lessens the appeal.
Has anyone else had any experience of doing something similar? Or would you recommend against doing this?
We have done something similar.
What you could do is specify two different runners, RunCucumberTest and RunCucumberIT: the first would run the unit tests and point to the step definitions for the unit tests, and the second would run the integration tests and point to the step definitions for the integration tests. In @CucumberOptions you can specify which step definitions (glue) the runner should use; just make sure to keep the "unit test" step definitions and the "integration test" step definitions in separate files/directories.
If there are any step definitions that don't depend on the distinction between unit and integration test, those could be in a "shared" step definitions file/directory, and called by both runners.
Files ending in *Test are picked up by the Maven Surefire plugin during the unit-test (test) phase.
Files ending in *IT are picked up by the Maven Failsafe plugin during the integration-test phase.
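For illustration, a sketch of the unit-test runner using Cucumber's JUnit 4 runner; the package names are invented, and RunCucumberIT would look the same except that its glue points at the integration-test step definitions:

// RunCucumberTest.java — picked up by Surefire because of the *Test suffix.
// Package names below are hypothetical.
import io.cucumber.junit.Cucumber;
import io.cucumber.junit.CucumberOptions;
import org.junit.runner.RunWith;

@RunWith(Cucumber.class)
@CucumberOptions(
    features = "classpath:features",                               // the shared .feature files
    glue = {"com.example.steps.shared", "com.example.steps.unit"}  // shared + unit-test glue
)
public class RunCucumberTest {
    // Intentionally empty: the annotations tell Cucumber what to run and where the glue lives.
}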
Hope this helps.

Is it possible to configure detox to only run a subset of the matched *.spec.js files?

As far as I can tell from the Detox docs, issues, and StackOverflow questions, there is no way to configure Detox so it only runs a subset of the matched tests (*.spec.js).
Does anyone know how to do this? I want to ask on here before I file an issue on the repo.
Most of the time it's desirable to simply run all matched tests. But in certain scenarios, it would be nice to only run a subset.
For example: I want to use Jest for 1) Acceptance tests + PR gating and 2) Traversing the app and generating screenshots of the various screens. Use case 1 is fast and lightweight. Use case 2 is expensive and will take a long time.
For each use case, I only want to run the tests for that use case. Does anyone know how to do this? I can think of several hacky approaches (file renaming, conditional logic in tests that keys on env variables, etc.), but I think this should be a supported thing.

Creating Test suites in Spring IDE for the Spock Test specs

I have hundreds of test specifications written in Spock. All of these are functional tests and can be run independently. But I have come across a situation where I need to run a specific test before running some other test.
This was very easy to achieve using a JUnit test suite, and it was very straightforward in Eclipse. But since all my tests are Groovy tests, there is no easy way to create a test suite in Spring IDE for the Spock tests (written in Groovy).
Can someone please share some ideas on how to create a test suite, run some specific tests, and also define the order of the tests?
Any help would be much appreciated.
Spock specifications are valid JUnit tests (or suites) as well. That's why they are recognized by tools such as STS. You should be able to add them to a test suite just like any other JUnit test.
On the other hand, it doesn't sound like good practice if your tests depend on execution order.
If certain tasks need to be performed before test execution, they should be placed in the setup() method. If that logic is common to more than one test, consider extracting it to a parent class.
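If you do still want an explicit suite, here is a minimal sketch, assuming Spock 1.x (whose specifications run on JUnit 4) and made-up spec class names:

// FunctionalTestSuite.java — sketch; LoginSpec and CheckoutSpec are hypothetical Spock specs.
import org.junit.runner.RunWith;
import org.junit.runners.Suite;

@RunWith(Suite.class)
@Suite.SuiteClasses({
    LoginSpec.class,    // Spock specifications are JUnit-runnable, so they can be listed here
    CheckoutSpec.class
})
public class FunctionalTestSuite {
    // Empty on purpose: the listed classes run in the order given above.
}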
If all you need is sequential execution of methods within a spec, have a look at @spock.lang.Stepwise, which is handy for testing workflows. Otherwise, you have the same possibilities as with plain JUnit: you can use JUnit (4) test suites, model test suites in your build tool of choice (which might not help within STS), or define test suites via Eclipse run configurations. I don't know how far support for the latter goes, but at the very least, it should allow you to run all tests in a package.
Although I think it won't allow you to specify the order of the tests, you could use Spock's Runner configuration or the @IgnoreIf/@Requires built-in extensions. Have a look at my response to a similar question. It's probably also worth looking at the RunnerConfiguration javadoc, as it shows that you can include classes directly instead of using annotations.
If the tests you want to run in a specific order are part of the same Spock Specification, then you can use the @Stepwise annotation to direct that the tests (feature methods) are executed in the order they appear in the Specification class.
As others mentioned, it's best to avoid this dependency if you can because of the complexity it introduces. For example, what happens if the first test fails? Does that leave the system in an undefined state for the subsequent tests? So it would be better to prevent the intra-test dependencies with setup() and cleanup() methods (or setupSpec() and cleanupSpec()).
Another option is to combine two dependent tests into a single multi-stage test with multiple when:/then: block pairs in sequence.
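For illustration, a minimal @Stepwise sketch with invented feature methods; if an earlier feature method fails, the remaining ones are skipped:

import spock.lang.Specification
import spock.lang.Stepwise

// Sketch only: @Stepwise runs the feature methods in the order they are declared.
@Stepwise
class CheckoutWorkflowSpec extends Specification {

    def "user logs in"() {
        expect:
        true    // placeholder for the real login assertions
    }

    def "user places an order"() {
        expect:
        true    // only runs after "user logs in" has passed
    }
}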
