Are the built-in integration tests run concurrently or sequentially? (Rust)

I am writing integration tests that work with a database. At the start of each test, I clear the storage and create some data.
I want my tests to run sequentially to ensure that I am working with an empty database. But it seems that integration tests are run concurrently because sometimes I get existing documents after cleaning the database.
I checked the database and found that the documents created in different tests have approximately the same creation time, even when I add a delay to each test (with std::thread::sleep_ms(10000)).
Can you clarify how the integration tests are run, and is it possible to run them in order?

The built-in testing framework runs tests concurrently by default. It is designed to offer useful but simple support for testing that covers many needs, and a lot of functionality can and should be tested with each test independent of the others. (Tests that are independent can be run in parallel.)
That said, it does listen to the RUST_TEST_THREADS environment variable, e.g. RUST_TEST_THREADS=1 cargo test will run the tests on a single thread. However, if you always want this behaviour for your tests, you may be interested in not using #[test], or at least not directly.
The most flexible way is via cargo's support for tests that completely define their own framework, via something like the following in your Cargo.toml:
[[test]]
name = "foo"
harness = false
With that, cargo test will compile and run tests/foo.rs as an ordinary binary. That binary can then ensure that operations are sequenced and state is reset appropriately.
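For concreteness, here is a minimal sketch of what tests/foo.rs might look like; reset_database and the two test functions are hypothetical placeholders for your own setup and assertions:

// tests/foo.rs, built as an ordinary binary because of `harness = false`.

fn reset_database() {
    // Drop all documents and insert fresh fixtures here.
}

fn test_create_document() {
    // Exercise the code under test and assert on the results.
}

fn test_delete_document() {
    // ...
}

fn main() {
    // With no test harness there is no parallelism: everything below
    // runs strictly in the order written, resetting state between steps.
    reset_database();
    test_create_document();

    reset_database();
    test_delete_document();

    println!("sequential integration tests passed");
}

A non-zero exit code (e.g. from a panicking assertion) is what tells cargo test that the binary failed.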
Alternatively, maybe a framework like stainless has the functionality you need. (I've not used it so I'm not sure.)

An alternative to an env var is the --test-threads flag. Set it to a single thread to run your tests sequentially.
cargo test -- --test-threads 1
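If you would rather keep the parallel runner and serialize only the tests that share the database, another common pattern (a sketch, not from the answers above; the test bodies are placeholders, and Mutex::new in a static requires Rust 1.63+) is to take a process-wide lock in each such test:

use std::sync::Mutex;

// One lock shared by every test that touches the database.
static DB_LOCK: Mutex<()> = Mutex::new(());

#[test]
fn creates_documents() {
    let _guard = DB_LOCK.lock().unwrap();
    // Clear the storage, create fixtures, assert...
}

#[test]
fn deletes_documents() {
    let _guard = DB_LOCK.lock().unwrap();
    // Clear the storage, create fixtures, assert...
}

One caveat: if a test panics while holding the lock, the mutex is poisoned and the unwrap() in the other tests will panic too; lock().unwrap_or_else(|e| e.into_inner()) sidesteps that.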

Related

In what order are Jest tests run?

I am having some problems with the order in which Jest runs tests.
Let's imagine that I have two test files:
module1.spec.ts
module2.spec.ts
Each of those modules uses a common database. At the beginning of each file, I have a beforeEach call that drops the database and recreates fresh data.
If I launch module1.spec.ts, everything works fine. If I launch module2.spec.ts, everything works fine.
When I try to use the jest global command to launch all of my tests, it does not work.
My theory is that module1.spec.ts and module2.spec.ts are running in different threads, or at least are run more or less at the same time.
The problem with that approach is that the two tests have to run one after the other because module1.spec.ts and module2.spec.ts are both dropping the database and creating data during their test.
Is there something I am missing concerning testing an application with a database?
Is there a Jest option to run the tests one after the other?
I encountered this problem. For now, my method is to export the test cases from module1.spec.js and module2.spec.js, require them in a third file such as index.spec.js, and control the order in which the test cases run inside index.spec.js. I don't use --runInBand because I want other test files to run in parallel with index.spec.js.

Heroku workers in dev

I'm looking into using a worker as well as a web dyno for the first time, as I have to scrape a website. Before I commit to this, I'm just wondering how things work in a dev environment. How do jobs in a queue get handled when I'm testing my app before it's pushed to Heroku?
I will probably be using RabbitMQ if that's relevant here.
I guess it depends on what you mean by testing. You can unit test the code that does the scraping in isolation from any queue, and you can provide a mock implementation of the queue operations to handle a goodly portion of your integration tests.
I suppose you might want a real instance of the queue for certain tests, but depending on the nature of your project, you might be satisfied with the sorts of tests described in the first paragraph.
If you simply must test the queue operation, and/or you want to run a complete copy of production locally, then you'll have to stand up an instance of RabbitMQ. You can stand one up locally or use one of the SaaS providers.
If you have multiple developers working on the project, you might want to make it easy for them by creating something like a Vagrant script that sets up a complete environment in a VM, or better still something like Docker. Doing so also gives you a lot more deployment options (making you less dependent on the Heroku tooling).
Lastly, numerous CI solutions like Travis CI provide instances of popular services for running tests (including RabbitMQ).

BrowserStack runs do not update their capabilities

I was wondering if anyone else knows a good way to start individual BrowserStack tests sequentially using Capybara/BrowserStack/Cucumber.
I'm having issues with Capybara in the sense that BrowserStack doesn't get updated with my new capabilities for every run, even when I shut down my browser, i.e. the two test runs are started sequentially in BrowserStack, but with the same browser and OS settings.
Abstract Scenario: Run login tests
Given that I want to test x website with capabilities of:
Examples:
  | browser | browser_version | os      | os_version | resolution |
  | IE      | 11.0            | Windows | 8.1        | 1024x768   |
  | Firefox | 45.0            | Windows | 10         | 1024x768   |
I've checked that every value successfully gets sent through to the next step, but it seems like BrowserStack doesn't pick up the new capabilities that I'm trying to set.
I know I can probably manage to do parallel runs by setting capabilities through settings instead, but our BrowserStack license limits how many parallel runs we can use. That's why I want to run them sequentially, and figured this could be a way to do it.
In my experience, BrowserStack initiates a test on the particular OS/browser capability that it receives from your tests. Thus, it seems your setup is sending the same capability for both runs of the test.
I believe you want to run tests sequentially and on different OS/browser combinations. In that case, you can refer to BrowserStack's documentation on configuring parallel Cucumber tests using a Rake file, in the "Parallel tests" section. After creating all the files, you can run the following command to run the tests sequentially:
rake BS_USERNAME=<username> BS_AUTHKEY=<access_key> nodes=1

How can I automatically start the JMeter HTTP(S) Test Script Recorder?

I am trying to automate the creation of JMeter scripts based on existing Cucumber tests to avoid maintaining two separate sets of tests (one for acceptance and one for load testing).
The Cucumber recording works great locally when I add the HTTP Recorder to the Workbench and start the recording, however I cannot figure out how I can automatically start it from the command line. Is this possible at all?
Why not run Cucumber from JMeter?
Because I'd like to avoid running multiple instances of Cucumber at the same time, and I'd like to be able to distribute the load generation (using jmeter-server)
This is not possible yet.
You should discuss this on the user mailing list to give more details on your request.
If it looks useful, you can then create an enhancement request in JMeter's Bugzilla and the feature may be developed.

Running IIS server with Coypu and SpecFlow

I have already spent a lot of time googling for a solution, but I'm stuck!
I have an MVC application, and I'm trying to do "integration testing" for my views using Coypu and SpecFlow. But I don't know how I should manage the IIS server for this. Is there a way to actually run the server (at the first start of the tests) and make it use a special "test" DB (for example an in-memory RavenDB), emptied after each scenario (and filled during the Background)?
Is there a better or simpler way to do this?
I'm fairly new to this too, so take the answers with a pinch of salt, but as no one else has answered...
Is there a way to actually run the server (first start of tests) ...
You could use IIS Express, which can be called via the command line. You can spin up your website before any tests run (which I believe you can do with the [BeforeTestRun] attribute in SpecFlow) with a call via System.Diagnostics.Process.
The actual command line would be something like:
iisexpress.exe /path:c:\iisexpress\<your-site-published-to-filepath> /port:<anyport> /clr:v2.0
... and making the server use a special "test" DB (for example an in-memory RavenDB) emptied after each scenario (and filled during the background).
In order to use a special test DB, I guess it depends on how your data access works. If you can swap in an in-memory DB fairly easily, then I guess you could do that. Although my understanding is that integration tests should be as close to the production environment as possible, so if possible use the same DBMS you're using in production.
What I'm doing is just doing a data restore to my test DB from a known backup of the prod DB, each time before the tests run. I can again call this via command-line/Process before my tests run. For my DB it's a fairly small dataset, and I can restore just the tables relevant to my tests, so this overhead isn't too prohibitive for integration tests. (It wouldn't be acceptable for unit tests however, which is where you would probably have mock repositories or in-memory data.)
Since you're already using SpecFlow take a look at SpecRun (http://www.specrun.com/).
It's a test runner which is designed for SpecFlow tests and adds all sorts of capabilities, from small conveniences like better formatting of the Test names in the Test Explorer to support for running the same SpecFlow test against multiple targets and config file transformations.
With SpecRun you define a "Profile" which will be used to run your tests, not dissimilar to the VS .runsettings file. In there you can specify:
<DeploymentTransformation>
  <Steps>
    <IISExpress webAppFolder="..\..\MyProject.Web" port="5555"/>
  </Steps>
</DeploymentTransformation>
SpecRun will then start up an IIS Express instance running that website before running your tests. In the same place you can also set up custom deployment transformations (using the standard App.config transformations) to override the connection strings in your app's Web.config so that it points to the in-memory DB.
The only problem I've had with SpecRun is that the documentation isn't great; there are lots of video demonstrations, but I'd much rather have a few written tutorials. I guess that's what StackOverflow is here for.
