I am having some problems with the order used by Jest to run tests.
Let's imagine that I have two tests:
module1.spec.ts
module2.spec.ts
Each of those modules uses a common database. At the beginning of each file, I have a beforeEach call that drops the database and recreates fresh data.
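For illustration, the top of each spec file looks roughly like this (dropDatabase and seedData are hypothetical helpers standing in for the real data-access layer):

import { dropDatabase, seedData } from './test-helpers';

beforeEach(async () => {
  await dropDatabase(); // wipe whatever the previous test left behind
  await seedData();     // recreate the fresh fixture data
});

test('works against the freshly seeded data', async () => {
  // assertions against the seeded database
});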
If I launch module1.spec.ts, everything works fine. If I launch module2.spec.ts, everything works fine.
When I run the global jest command to launch all of my tests, they fail.
My theory is that module1.spec.ts and module2.spec.ts are running in different threads, or at least more or less "at the same time".
The problem is that the two files have to run one after the other, because both module1.spec.ts and module2.spec.ts drop the database and create data while their tests run.
Is there something I am missing about testing an application against a database?
Is there a Jest option to run the test files one after the other?
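The option usually suggested for this is Jest's --runInBand flag, which runs all test files serially in a single process instead of spawning a pool of parallel workers:

jest --runInBand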
I encountered this problem too. For now, my approach is to export the test cases from module1.spec.js and module2.spec.js, require them in a third file such as index.spec.js, and control the order in which the cases run from index.spec.js. I don't use --runInBand because I want the other test files to keep running in parallel with index.spec.js.
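A rough sketch of that layout (the names are illustrative, and the exporting files have to be kept out of Jest's normal test match, for example via testPathIgnorePatterns, so they only run through index.spec.js):

// module1.spec.js - export the cases instead of registering them directly
module.exports = () => {
  beforeEach(async () => {
    // drop and reseed the shared database here
  });

  it('covers module1 against fresh data', async () => {
    // ...
  });
};

// index.spec.js - requires both modules and fixes the running order
const module1Tests = require('./module1.spec');
const module2Tests = require('./module2.spec');

describe('module1', module1Tests);
describe('module2', module2Tests);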
I am currently working on my first test suite using Selenium with mocha.js in Node.js. I will also need to automate GUI outside of the browser in most of my test cases (I am doing this with robot.js), and because of that I will run the suite serially on a local Jenkins instance on a dedicated PC used only for automation.
I am wondering whether, in my case, it makes any sense to split my test suite into multiple files. I can skip failed tests and I have failure reporting connected to a Slack webhook, so how would I benefit from having multiple files instead of a single well-refactored and commented file?
I want to do integration tests without actually mocking anything.
I use a test DB and scripts that populate all entities with example data, and then I want to run tests against this data.
Now I want to use Jest's expect() function inside a special testing service that works as an ordinary Nest service, and simply trigger the controller to see what happens during the whole workflow.
Can I do that?
"expect" is now available as a standalone module. https://www.npmjs.com/package/expect
With this, you can use the Jest matchers in your special testing service.
Failures will throw errors that Jest understands and displays the same way it displays errors caused by expect() failures in the test code itself.
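A minimal sketch of how that could look in a Nest service (the names are illustrative; depending on the version of the expect package the import is a default or a named export):

// verification.service.ts - an ordinary Nest service that asserts with the
// standalone "expect" package instead of living inside a test file
import { Injectable } from '@nestjs/common';
import expect from 'expect'; // newer versions: import { expect } from 'expect';

@Injectable()
export class VerificationService {
  // called from wherever the workflow under test ends up
  verifyOrder(order: { id: string; total: number }): void {
    expect(order.id).toBeDefined();
    expect(order.total).toBeGreaterThan(0);
    // a failed matcher throws, and Jest reports it just like an
    // expect() failure written directly in the test code
  }
}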
I was wondering: is it possible to have Mocha run tests marked with .skip() alongside the default tests and have Mocha show me only those .skip() tests that ran successfully?
My idea is that this way I could disable tests that currently cannot pass, but Mocha would tell me if any of them eventually started working. To me this is different from running the tests without .skip(), because then every failed test would cause my whole test run to fail.
Edit: Think of this like a .try() option which ignores failures and displays successful runs.
This is purely a technical question; I know this idea doesn't fit well with testing conventions and best practices, so no discussions about ideal testing strategies and such ;)
Thank you!
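Mocha has no built-in .try() as far as I know, but the idea can be sketched as a small wrapper around it() that swallows failures (reporting them as skipped) and lets unexpectedly passing tests show up green (tryIt is a made-up helper name):

function tryIt(title, fn) {
  it(`[try] ${title}`, async function () {
    try {
      await fn.call(this);
    } catch (err) {
      this.skip(); // still failing - report as pending instead of failed
    }
    // reaching this point means the "disabled" test actually passed,
    // which is exactly the signal the reporter should surface
  });
}

tryIt('feature that is not implemented yet', async () => {
  // assertions that currently fail
});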
I am writing integration tests that work with a database. At the start of each test, I clear the storage and create some data.
I want my tests to run sequentially to ensure that I am working with an empty database. But it seems that integration tests are run concurrently because sometimes I get existing documents after cleaning the database.
I checked the database and found that the documents created in different tests have approximately the same creation time, even when I'm adding a delay for each test (with std::thread::sleep_ms(10000)).
Can you clarify how the integration tests are run, and is it possible to run them in order?
The built-in testing framework runs tests concurrently by default. It is designed to offer useful but simple support for testing that covers many needs, and a lot of functionality can and should be tested with each test independent of the others. (Being independent means they can be run in parallel.)
That said, it does listen to the RUST_TEST_THREADS environment variable, e.g. RUST_TEST_THREADS=1 cargo test will run tests on a single thread. However, if you want this functionality for your tests always, you may be interested in not using #[test], or, at least, not directly.
The most flexible way is via cargo's support for tests that completely define their own framework, via something like the following in your Cargo.toml:
[[test]]
name = "foo"
harness = false
With that, cargo test will compile and run tests/foo.rs as a binary. This can then ensure that operations are sequenced/reset appropriately.
Alternatively, maybe a framework like stainless has the functionality you need. (I've not used it so I'm not sure.)
An alternative to an env var is the --test-threads flag. Set it to a single thread to run your tests sequentially.
cargo test -- --test-threads 1
I have already spent a lot of time googling for a solution, but I'm stuck!
I have an MVC application and I'm trying to do "integration testing" of my views using Coypu and SpecFlow, but I don't know how to manage the IIS server for this. Is there a way to actually run the server (when the tests first start) and make it use a special "test" DB (for example an in-memory RavenDB) that is emptied after each scenario (and filled during the Background)?
Is there a better or simpler way to do this?
I'm fairly new to this too, so take the answers with a pinch of salt, but as no one else has answered...
Is there a way to actually run the server (when the tests first start) ...
You could use IIS Express, which can be called via the command line. You can spin up your website before any tests run (which I believe you can do with the [BeforeTestRun] attribute in SpecFlow) with a call via System.Diagnostics.Process.
The actual command line would be something like:
iisexpress.exe /path:c:\iisexpress\<your-site-published-to-filepath> /port:<anyport> /clr:v2.0
... and make it use a special "test" DB (for example an in-memory RavenDB) that is emptied after each scenario (and filled during the Background).
In order to use a special test DB, I guess it depends on how your data access works. If you can swap in an in-memory DB fairly easily, then I guess you could do that, although my understanding is that integration tests should be as close to the production environment as possible, so if you can, use the same DBMS you're using in production.
What I'm doing is restoring my test DB from a known backup of the prod DB each time before the tests run. I can again call this via the command line/Process before my tests run. For my DB it's a fairly small dataset, and I can restore just the tables relevant to my tests, so this overhead isn't too prohibitive for integration tests. (It wouldn't be acceptable for unit tests, however, which is where you would probably have mock repositories or in-memory data.)
Since you're already using SpecFlow, take a look at SpecRun (http://www.specrun.com/).
It's a test runner designed for SpecFlow tests that adds all sorts of capabilities, from small conveniences like better formatting of the test names in Test Explorer to support for running the same SpecFlow test against multiple targets and config file transformations.
With SpecRun you define a "Profile" which will be used to run your tests, not dissimilar to the VS .runsettings file. In there you can specify:
<DeploymentTransformation>
  <Steps>
    <IISExpress webAppFolder="..\..\MyProject.Web" port="5555"/>
  </Steps>
</DeploymentTransformation>
SpecRun will then start up an IISExpress instance running that Website before running your tests. In the same place you can also set up custom Deployment Transformations (using the standard App.Config transformations) to override the connection strings in your app's Web.config so that it points to the in-memory DB.
The only problem I've had with SpecRun is that the documentation isn't great: there are lots of video demonstrations, but I'd much rather have a few written tutorials. I guess that's what Stack Overflow is here for.