BrowserStack runs do not update their capabilities - Cucumber

I was wondering if anyone knows a good way to start individual BrowserStack tests sequentially using Capybara/BrowserStack/Cucumber.
I'm having issues with Capybara in the sense that BrowserStack doesn't get updated with my new capabilities for every run, even when I shut down my browser. That is, the two test runs are started sequentially in BrowserStack, but with the same browser and OS settings.
Scenario Outline: Run login tests
  Given that I want to test x website with capabilities of
  Examples:
    | browser | browser_version | os      | os_version | resolution |
    | IE      | 11.0            | Windows | 8.1        | 1024x768   |
    | Firefox | 45.0            | Windows | 10         | 1024x768   |
I've checked that every value successfully gets sent through to the next step, but it seems like BrowserStack doesn't pick up the new capabilities I'm trying to set.
I know I could probably manage parallel runs by setting capabilities through settings instead, but our BrowserStack license limits the number of parallel runs. That's why I want to run the tests sequentially, and I figured this could be a way to do it.

In my experience, BrowserStack initiates a test on the particular OS/browser capability it receives from your tests. So it seems your setup is sending the same capabilities for both runs of the test.
I believe you want to run tests sequentially on different OS/browser combinations. In that case, you can refer to BrowserStack's documentation on configuring parallel Cucumber tests using a Rake file, in the "Parallel tests" section. After creating all the files, you can run the following command to run the tests sequentially:
rake BS_USERNAME=<username> BS_AUTHKEY=<access_key> nodes=1

Related

Is there a reason for splitting a mochajs test suite into multiple files if I am going to run it serially?

I am currently working on my first test suite using Selenium with mocha.js in Node.js. I will also need to automate GUI interactions outside the browser in most of my test cases (I am doing this with robot.js), and because of that I will run the suite serially on a local Jenkins instance on a PC dedicated to automation.
I am wondering whether, in my case, it makes any sense to split my test suite into multiple files. I can skip failed tests, and I have failure reporting connected to a Slack webhook, so how would I benefit from having multiple files instead of a single well-refactored and commented file?

Running integration tests automatically on GitHub

Is there any existing tooling/platform I can use to do the following?
On any GitHub PR or commit, run a custom "check", e.g. the same way Travis CI works.
Have this task talk to a remote machine on Azure.
Execute a script on that machine and collect the logs and exit code.
Fail the check if the exit code is non-zero or a timeout is reached.
Handle queuing if two PRs come in, clean up on abort, etc.
Have some sort of "status" badge like Travis CI's to show the current test state/pass rate.
So far only Travis CI itself seems to work something like this, but whatever I execute will run in their cloud, so I don't "own" the machine. Additionally, my integration tests require copyrighted data that needs to be kept safe on my own cloud machine, and they could take multiple hours to complete.
Yes, you can. https://help.github.com/articles/about-webhooks/ describes how to do this. Your machine will need to be reachable by GitHub for the webhook deliveries to arrive.
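To make that concrete, below is a minimal sketch of the receiving end, written in TypeScript against Node's built-in modules only. The script name, port, timeout, and payload fields are assumptions, and a real receiver should also verify GitHub's X-Hub-Signature header before acting on a payload. Handling runs one at a time also gives you the queuing behaviour you mention.

import * as http from "http";
import { execFile } from "child_process";

// Placeholder values -- adjust for your setup.
const TEST_SCRIPT = "./run-integration-tests.sh";
const PORT = 8080;

const queue: string[] = []; // commit SHAs waiting to be tested
let busy = false;

function runNext(): void {
  if (busy || queue.length === 0) return;
  busy = true;
  const sha = queue.shift()!;
  // A non-zero exit code or a timeout (here: 2 hours) fails the check.
  execFile(TEST_SCRIPT, [sha], { timeout: 2 * 60 * 60 * 1000 }, (err) => {
    const state = err ? "failure" : "success";
    // At this point you would POST `state` for `sha` to GitHub's commit
    // status API, which is what drives the check mark/badge on the PR.
    console.log(`${sha}: ${state}`);
    busy = false;
    runNext();
  });
}

// GitHub POSTs a JSON payload here for each push/pull request event.
http
  .createServer((req, res) => {
    let body = "";
    req.on("data", (chunk) => (body += chunk));
    req.on("end", () => {
      try {
        const event = JSON.parse(body);
        // "after" is set on push events; PR events carry the head SHA.
        queue.push(event.after ?? event.pull_request?.head?.sha ?? "HEAD");
      } catch {
        // ignore pings and non-JSON payloads
      }
      res.statusCode = 202;
      res.end("queued");
      runNext();
    });
  })
  .listen(PORT);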

Are the built-in integration tests run concurrently or sequentially?

I am writing integration tests that work with a database. At the start of each test, I clear the storage and create some data.
I want my tests to run sequentially to ensure that I am working with an empty database, but it seems that integration tests run concurrently, because sometimes I see existing documents after cleaning the database.
I checked the database and found that documents created in different tests have approximately the same creation time, even when I add a delay to each test (with std::thread::sleep_ms(10000)).
Can you clarify how integration tests are run, and is it possible to run them in order?
The built-in testing framework runs tests concurrently by default. It is designed to offer useful but simple support for testing that covers many needs, and a lot of functionality can (and should) be tested with each test independent of the others. (Being independent means they can be run in parallel.)
That said, it does listen to the RUST_TEST_THREADS environment variable; e.g. RUST_TEST_THREADS=1 cargo test will run tests on a single thread. However, if you always want this behaviour for your tests, you may be interested in not using #[test], or at least not directly.
The most flexible way is to use Cargo's support for tests that define their own framework, with something like the following in your Cargo.toml:
[[test]]
name = "foo"
harness = false
With that, cargo test will compile and run tests/foo.rs as a binary. This can then ensure that operations are sequenced/reset appropriately.
Alternatively, maybe a framework like stainless has the functionality you need. (I've not used it so I'm not sure.)
An alternative to an env var is the --test-threads flag. Set it to a single thread to run your tests sequentially.
cargo test -- --test-threads 1

How can I automatically start the JMeter HTTP(S) Test Script Recorder?

I am trying to automate the creation of JMeter scripts based on existing Cucumber tests, to avoid maintaining two separate sets of tests (one for acceptance and one for load testing).
The Cucumber recording works great locally when I add the HTTP(S) Test Script Recorder to the Workbench and start the recording; however, I cannot figure out how to start the recorder automatically from the command line. Is this possible at all?
Why not run Cucumber from JMeter?
Because I'd like to avoid running multiple instances of Cucumber at the same time, and I'd like to be able to distribute the load generation (using jmeter-server)
This is not possible yet.
You should discuss this on the JMeter user mailing list to give more details about your use case.
If it looks useful, you can then create an enhancement request in JMeter's Bugzilla, and the feature may be developed.

Node.js packages to handle parallel headless tests on Linux box(es) with Selenium Grid-like features?

I need to handle multiple authenticated users running parallel tests on the Selenium standalone server, and I have discovered two WebDriver clients for Node.js: webdriver-js and wd-js. Which is more active and reliable? Any experiences? I'm a bit concerned about them breaking when Node or Selenium updates or removes features.
I don't think either of those packages mentions automatically starting Xvfb on a unique display number per test. So should I start shell commands to run Xvfb before driving the browser?
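Roughly, yes. A sketch of that idea in TypeScript follows; it assumes Xvfb is installed, picks arbitrary display numbers, and uses "node test.js" as a stand-in for whatever actually drives the browser:

import { spawn } from "child_process";

// Start a private Xvfb instance per test on its own display number, and
// point the test at it via the DISPLAY environment variable.
function runTestOnDisplay(displayNum: number): Promise<void> {
  const xvfb = spawn("Xvfb", [`:${displayNum}`, "-screen", "0", "1280x1024x24"]);
  // Real code should wait for Xvfb to be ready (e.g. poll /tmp/.X11-unix)
  // rather than racing it the way this sketch does.
  return new Promise((resolve, reject) => {
    const test = spawn("node", ["test.js"], {
      env: { ...process.env, DISPLAY: `:${displayNum}` },
      stdio: "inherit",
    });
    test.on("exit", (code) => {
      xvfb.kill(); // always tear the virtual display down
      if (code === 0) resolve();
      else reject(new Error(`test exited with code ${code}`));
    });
  });
}

// Run three tests in parallel, one display each (:99, :100, :101).
Promise.all([99, 100, 101].map(runTestOnDisplay))
  .then(() => console.log("all passed"))
  .catch((err) => console.error(err.message));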
The following process is what I am trying to build in Node.js (it's essentially like Selenium Grid 2, but in Node.js, for the purpose of continuously running integration tests), and I'm looking for packages or suggestions for any of the following parts.
First authenticate the user(s) using a persistent bi-directional connection (WebSockets or HTTP 1.1)
Start/queue tests requested by users on the available hardware nodes (I will add more Linux boxes, so I need a package to distribute parallel tests across the "grid"; see the sketch after this list)
Monitor the running Selenium browser tests and send the client status updates (e.g. running/stopped)
Tests submitted by users need to be persisted and accessible for future runs or continuous integration (CouchDB or MySQL)
Schedule jobs to run on a continuous basis (e.g. at a set interval).
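For the queuing point above, here is a minimal sketch of the idea in TypeScript. The node addresses are placeholders and runTest stands in for whatever actually drives a test on a remote box; this is not any real package's API:

// A FIFO of pending tests drained onto whichever grid node is free.
interface GridNode {
  host: string;
  busy: boolean;
}

const nodes: GridNode[] = [
  { host: "linux-box-1:4444", busy: false }, // placeholder addresses
  { host: "linux-box-2:4444", busy: false },
];
const pending: string[] = []; // test ids submitted by users

function dispatch(runTest: (test: string, host: string) => Promise<void>): void {
  let node = nodes.find((n) => !n.busy);
  while (node && pending.length > 0) {
    const test = pending.shift()!;
    const claimed = node;
    claimed.busy = true;
    runTest(test, claimed.host).finally(() => {
      claimed.busy = false; // free the slot, then try to drain more work
      dispatch(runTest);
    });
    node = nodes.find((n) => !n.busy);
  }
}

// Called whenever a user submits a test, e.g. over the WebSocket connection.
function submit(testId: string, runTest: (t: string, host: string) => Promise<void>): void {
  pending.push(testId);
  dispatch(runTest);
}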
Is Node.js a bit overkill? Should I focus on Java only for the backend?
https://github.com/LearnBoost/soda
This is for vanilla Sauce Labs/Selenium RC integration. I'd imagine that when you're running in a browser instance like Selenium RC, WebSockets should just work, since the JavaScript on the page is executed. If you're authenticating a user, you just fill out the form and submit (which triggers your WS auth) as normal.
I don't think Node.js is overkill for this. Node is lightweight. I don't know that I'd add Node to my stack ONLY for this, but it's certainly convenient, and if you have a commitment to JavaScript, it's no big deal.
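As an illustration of the "fill out the form and submit" point, here is a short sketch using the modern selenium-webdriver package for Node (an assumption on my part; the answer above predates it), with placeholder URL, field names, and credentials:

import { Builder, By, until } from "selenium-webdriver";

// Hypothetical login flow: the URL, field names, and credentials are
// placeholders. Submitting the form runs the page's own JavaScript, so
// any WebSocket-based auth it performs happens as it would for a real user.
async function loginAndTest(): Promise<void> {
  const driver = await new Builder().forBrowser("firefox").build();
  try {
    await driver.get("http://localhost/login");
    await driver.findElement(By.name("username")).sendKeys("testuser");
    await driver.findElement(By.name("password")).sendKeys("secret");
    await driver.findElement(By.css("form")).submit();
    // Wait until the app reports the authenticated state.
    await driver.wait(until.urlContains("/dashboard"), 10_000);
  } finally {
    await driver.quit();
  }
}

loginAndTest().catch((err) => console.error(err));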
