I was wondering: is it possible to have Mocha run tests marked with .skip() alongside the default tests, and have Mocha show me only those .skip() tests that completed successfully?
My idea is that this way I could disable tests that currently cannot pass, but Mocha would tell me if any of them eventually started working. To me this is different from running the tests without .skip(), because then every failed test would cause my whole test run to fail.
Edit: Think of this like a .try() option which ignores failures and displays successful runs.
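To make the idea concrete, here is a rough sketch of what such a helper could look like. itTry is a made-up name, not a built-in Mocha feature; it swallows failures (marking the test as pending) and lets successes show up as passing:
// Hypothetical helper, not part of Mocha's API.
function itTry(title, fn) {
  it(`[try] ${title}`, async function () {
    try {
      await fn.call(this);
      // Body succeeded: the test passes and shows up in the report.
    } catch (err) {
      // Body still fails: mark the test as pending instead of failing the run.
      this.skip();
    }
  });
}

itTry('feature that is not implemented yet', async () => {
  // assertions that currently fail go here
});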
This is purely a technical question; I know this idea doesn't fit well with testing conventions and best practices, so no discussions about ideal testing strategies and such, please ;)
Thank you!
We use Bitbucket Pipelines in our CI for testing.
Our application is NestJS with TypeScript, tested with Jest.
All tests used to run fine, but for a few days now (May 2022) the run gets stuck after some suite, and the suite where it gets stuck is fairly random.
The tests don't fail, we don't get any memory warning or anything else; the run just hangs in the pipeline. We have to stop the pipeline manually because it never stops on its own.
Unfortunately we don't have any error output for further investigation.
What could we do to inspect this in more detail?
I was facing the same issue and I noticed that Jest was consuming all the resources, so what worked for me was to set a CPU usage limit for the tests using the following command:
jest --maxWorkers=20%
And I found this solution reading this amazing article here
Without this parameter, Jest would consume all the resources of the Docker machine on Bitbucket, potentially increasing the runtime.
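If you prefer to keep this out of the CLI invocation, the same limit can be set in the Jest config file (jest.config.js is an assumption here; Jest's maxWorkers option also accepts a percentage string):
// jest.config.js
module.exports = {
  // Cap the number of worker processes at 20% of the available cores,
  // equivalent to running jest --maxWorkers=20%.
  maxWorkers: '20%',
};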
Another solution, which worked better for me than the above, was to double the size of the build container. You also get faster pipelines (albeit at a marginally higher cost), so weigh the tradeoff to see what works best in your case. You can double the size of the build container with the size: 2x option in bitbucket-pipelines.yml, as shown below.
...
- step:
    name: Run Unit Tests
    image: node:14.17.0
    size: 2x
...
More info here: https://support.atlassian.com/bitbucket-cloud/docs/configure-bitbucket-pipelinesyml/
You could try using the --runInBand flag, so you use one cache storage for all your tests instead of a cache per thread.
yarn jest --ci --runInBand
More details here
I want to do integration tests without actually mocking anything.
I use a test DB and scripts to load example data into all entities, and then I want to run tests on this data.
Now I want to use Jest's expect() function inside a special testing service that works as an ordinary Nest service, and just trigger the controller to see what happens during the whole workflow.
Can I do that?
"expect" is now available as a standalone module. https://www.npmjs.com/package/expect
With this, you can use the Jest matchers in your special testing service.
Failures will throw errors that Jest reports in the same way it reports expect() failures in the test code itself.
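For illustration, a minimal sketch of such a service, assuming a recent version of the expect package (which provides a named expect export); the class and method names here are made up:
// assertion.service.ts (illustrative name)
import { Injectable } from '@nestjs/common';
import { expect } from 'expect'; // standalone Jest matchers, no test runner required

@Injectable()
export class AssertionService {
  // Throws if the entity does not have the expected shape; the thrown error
  // is picked up by Jest like any other expect() failure.
  verifyUser(user: { id: number; name: string }): void {
    expect(user).toEqual(
      expect.objectContaining({
        id: expect.any(Number),
        name: expect.any(String),
      }),
    );
  }
}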
I am having some problems with the order used by Jest to run tests.
Let's imagine that I have two test files:
module1.spec.ts
module2.spec.ts
Each of those modules uses a common database. At the beginning of each file, I have a beforeEach call that drops the database and recreates fresh data.
If I launch module1.spec.ts, everything works fine. If I launch module2.spec.ts, everything works fine.
When I try to use the global jest command to launch all of my tests, it does not work.
My theory is that module1.spec.ts and module2.spec.ts run in different threads, or at least more or less "at the same time".
The problem with that is that the two files have to run one after the other, because both module1.spec.ts and module2.spec.ts drop the database and create data during their tests.
Is it something I am missing concerning testing an application with a database?
Is there a Jest option to run the tests one after the other?
I encountered this problem. For now, my method is to export the test cases from module1.spec.js and module2.spec.js, require them in a third file such as index.spec.js, and control the order in which the test cases run there. I don't use --runInBand because I want the other test files to keep running in parallel with index.spec.js.
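A minimal sketch of that pattern, assuming the suite files are named so that Jest does not pick them up on its own (module1.suite.js and module1Suite are illustrative names) and only index.spec.js matches the test pattern:
// module1.suite.js (illustrative name, not matched by Jest's testMatch)
module.exports = () =>
  describe('module 1', () => {
    beforeEach(async () => {
      // drop the database and recreate fresh data here
    });

    test('works with the shared database', () => {
      // assertions against the freshly created data
    });
  });

// index.spec.js
const module1Suite = require('./module1.suite');
const module2Suite = require('./module2.suite');

// Registering both suites in a single spec file forces them to run
// sequentially inside one Jest worker, in exactly this order.
module1Suite();
module2Suite();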
I've managed to get concurrent JUnit based tests running in Saucelabs using the Sauce ConcurrentParameterized JUnit runner (As described at https://wiki.saucelabs.com/display/DOCS/Java+Test+Setup+Example#JavaTestSetupExample-RunningTestsinParallel).
I'm wondering if there is a runner that achieves the same thing for Cucumber-based tests?
I don't think there is such a runner.
The Cucumber runner is, as far as I know, single-threaded and doesn't execute tests in parallel. But executing in parallel is only half of your problem; the other half is connecting to Saucelabs, and that is not supported by Cucumber.
My current approach, if I wanted to execute on Saucelabs, would be to use JUnit and live with the fact that I'm losing the nice scenarios that Cucumber brings to the table. This doesn't mean that the JUnit tests couldn't use the same helpers the Cucumber steps do.
I am writing integration tests that work with a database. At the start of each test, I clear the storage and create some data.
I want my tests to run sequentially to ensure that I am working with an empty database. But it seems that integration tests are run concurrently because sometimes I get existing documents after cleaning the database.
I checked the database and found that the documents created in different tests have approximately the same creation time, even when I'm adding a delay for each test (with std::thread::sleep_ms(10000)).
Can you clarify how the integration tests are run, and is it possible to run them in order?
The built-in testing framework runs tests concurrently by default. It is designed to offer useful but simple support for testing that covers many needs, and a lot of functionality can/should be tested with each test independent of the others. (Being independent means they can be run in parallel.)
That said, it does listen to the RUST_TEST_THREADS environment variable, e.g. RUST_TEST_THREADS=1 cargo test will run tests on a single thread. However, if you want this functionality for your tests always, you may be interested in not using #[test], or, at least, not directly.
The most flexible way is via cargo's support for tests that completely define their own framework, via something like the following in your Cargo.toml:
[[test]]
name = "foo"
harness = false
With that, cargo test will compile and run tests/foo.rs as a binary. This can then ensure that operations are sequenced/reset appropriately.
Alternatively, maybe a framework like stainless has the functionality you need. (I've not used it so I'm not sure.)
An alternative to an env var is the --test-threads flag. Set it to a single thread to run your tests sequentially.
cargo test -- --test-threads 1