Is it possible to run Cucumber tests concurrently in Saucelabs?

I've managed to get concurrent JUnit-based tests running in Saucelabs using the Sauce ConcurrentParameterized JUnit runner (as described at https://wiki.saucelabs.com/display/DOCS/Java+Test+Setup+Example#JavaTestSetupExample-RunningTestsinParallel).
I'm wondering if there is a runner that achieves the same thing for Cucumber-based tests?

I don't think there is such a runner.
The Cucumber runner is, as far as I know, single threaded and doesn't execute tests in parallel. But executing in parallel is only half of your problem; the other half is connecting to Saucelabs, and that isn't supported by Cucumber either.
My current approach, if I wanted to execute on Saucelabs, would be to use JUnit and live with the fact that I'm lacking the nice scenarios that Cucumber brings to the table. This doesn't mean that the JUnit tests couldn't use the same helpers as the Cucumber steps do.
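For illustration, here is a minimal sketch of that JUnit approach, modelled on the ConcurrentParameterized example from the Saucelabs page linked above. It assumes the sauce_junit helper library and Selenium are on the classpath and that your credentials are in the SAUCE_USERNAME and SAUCE_ACCESS_KEY environment variables; the class name, browser matrix, and target URL are made up, and the test body is where you'd call the same helpers your Cucumber steps use:

import java.net.URL;
import java.util.LinkedList;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

import com.saucelabs.junit.ConcurrentParameterized;

@RunWith(ConcurrentParameterized.class)
public class SampleSauceTest {

    private final String browser;
    private final String version;
    private final String os;

    public SampleSauceTest(String browser, String version, String os) {
        this.browser = browser;
        this.version = version;
        this.os = os;
    }

    // Each String[] becomes one test instance; the runner executes the
    // instances concurrently, each as a separate Saucelabs job.
    @ConcurrentParameterized.Parameters
    public static LinkedList<String[]> browsersStrings() {
        LinkedList<String[]> browsers = new LinkedList<String[]>();
        browsers.add(new String[]{"firefox", "45", "Windows 7"});
        browsers.add(new String[]{"chrome", "50", "OS X 10.11"});
        return browsers;
    }

    @Test
    public void pageLoads() throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setCapability("browserName", browser);
        caps.setCapability("version", version);
        caps.setCapability("platform", os);

        // Connect to the Saucelabs grid with credentials from the environment.
        RemoteWebDriver driver = new RemoteWebDriver(
                new URL("http://" + System.getenv("SAUCE_USERNAME") + ":"
                        + System.getenv("SAUCE_ACCESS_KEY")
                        + "@ondemand.saucelabs.com:80/wd/hub"),
                caps);
        try {
            // Here you would delegate to the same helper/page-object code
            // that your Cucumber step definitions use, so nothing is duplicated.
            driver.get("https://example.com/login");
        } finally {
            driver.quit();
        }
    }
}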

Related

Can mocha run .skip tests alongside normal tests?

I was wondering: is it possible to have mocha run tests marked with .skip() alongside the default tests, and have mocha show me only those .skip() tests that ran successfully?
My idea is that this way I could disable tests that currently can't pass, but mocha would still tell me if any of them finally worked. To me this would be different from running the tests without .skip(), because then every failed test would cause my whole test run to fail.
Edit: Think of this like a .try() option which ignores failures and displays successful runs.
This is purely a technical question; I know this idea doesn't fit well with testing conventions and best practices, so no discussions about ideal testing strategies and such, please ;)
Thank you!

How do I run tests in parallel on same browser / platform?

There is a config option called "maxConcurrency" which can be used to run the same set of tests concurrently on different browser environments, but that's not the same as running tests in parallel.
Is this even possible? The comparison section of the Intern homepage says Intern supports "Runs tests in parallel for improved performance", which seems totally invalid if this is not the case.
Update: I got it confirmed that Intern cannot run tests in parallel in the same environment.

Are the built-in integration tests run concurrently or sequentially?

I am writing integration tests that work with a database. At the start of each test, I clear the storage and create some data.
I want my tests to run sequentially to ensure that I am working with an empty database, but it seems that integration tests run concurrently, because sometimes I see existing documents after cleaning the database.
I checked the database and found that the documents created in different tests have approximately the same creation time, even when I add a delay to each test (with std::thread::sleep_ms(10000)).
Can you clarify how the integration tests are run, and is it possible to run them in order?
The built-in testing framework runs tests concurrently by default. It is designed to offer useful but simple support for testing that covers many needs, and a lot of functionality can/should be tested with each test independent of the others. (Being independent means they can be run in parallel.)
That said, it does listen to the RUST_TEST_THREADS environment variable; e.g. RUST_TEST_THREADS=1 cargo test will run tests on a single thread. However, if you always want this functionality for your tests, you may be interested in not using #[test], or at least not directly.
The most flexible way is cargo's support for tests that completely define their own framework, via something like the following in your Cargo.toml:
[[test]]
name = "foo"
harness = false
With that, cargo test will compile and run tests/foo.rs as a binary. This can then ensure that operations are sequenced/reset appropriately.
Alternatively, maybe a framework like stainless has the functionality you need. (I've not used it so I'm not sure.)
An alternative to the env var is the --test-threads flag. Set it to a single thread to run your tests sequentially.
cargo test -- --test-threads 1

How can I automatically start the JMeter HTTP(S) Test Script Recorder?

I am trying to automate the creation of JMeter scripts based on existing Cucumber tests, to avoid maintaining two separate sets of tests (one for acceptance testing and one for load testing).
Recording the Cucumber run works great locally when I add the HTTP(S) Test Script Recorder to the WorkBench and start the recording manually; however, I cannot figure out how to start it automatically from the command line. Is this possible at all?
Why not run Cucumber from JMeter?
Because I'd like to avoid running multiple instances of Cucumber at the same time, and I'd like to be able to distribute the load generation (using jmeter-server).
This is not possible yet.
You should discuss this on the user mailing list to give more details on your request.
If it looks useful, you can then create an enhancement request in the JMeter Bugzilla, and the feature may be developed.

In Jenkins, how to set an upstream build unstable from its downstream build

I have a job that builds a project and a downstream job that uses some scripts to test it.
Is there any way to change the result of a build from a downstream build?
I tried using the Groovy script below, but it did not work:
Hudson.instance.items[10].getLastBuild().setResult(hudson.model.Result.UNSTABLE)
You can use the parameterized build plugin. It allows you to run your downstream builds as a build step, so your upstream build can fail if any of the downstream builds fail.
In the job configuration, under "Post-build Actions", there's an "Aggregate downstream test results" option.
According to the help:
Because tests often dominate the execution time, a Hudson best practice involves splitting test execution into multiple different jobs.
When you do that, setting up test aggregation is a convenient way of collecting all the test results from such downstream test jobs and displaying them along with the build that they are testing. In this way, people can quickly see the overall test status of the given build.
That should do what you need.
