I am writing Selenium test cases for a workflow application.
I have several different test classes in an MSTest project, say ClassA, ClassB, ClassC, and so on.
How can I order the execution of test classes in MSTest?
Thanks
You can't control the execution order, and tests should be able to run in any order. They should not depend on any global state; that will only lead to pain in the end. Try to remove all dependencies between tests and test classes.
Our codebase has more than 100 projects, with tests in each. Some test assemblies take much longer to execute than others.
The Azure DevOps Server runs our whole test suite in parallel, which makes it really fast.
The problem is that the long-running tests are started in the middle of the test run, which makes the whole run take longer.
Is there a way to influence the order in which the test assemblies are started? I want to start the long-running test assemblies first and the fast test assemblies after that.
Since you are running the tests in parallel, you could try the "Based on past running time of tests" option in the Visual Studio Test task.
According to the documentation about running tests in parallel:
This setting considers past running times to create slices of tests so that each slice has approximately the same running time. Short-running tests will be batched together, while long-running tests will be allocated to separate slices.
This option groups tests by running time, so each group completes in roughly the same time.
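In YAML pipelines, this option corresponds (if I recall the task schema correctly) to the `distributionBatchType` input of the `VSTest@2` task; treat the exact input names below as something to verify against the task reference:

```yaml
# Sketch of a VSTest@2 step that slices tests by past running time.
# Input names per the VSTest@2 task schema; verify against current docs.
- task: VSTest@2
  inputs:
    testAssemblyVer2: |
      **\*Tests.dll
      !**\obj\**
    distributionBatchType: basedOnExecutionTime  # slice using historical timings
```

Combined with a multi-agent parallel job strategy, this lets the task pack slow and fast assemblies into evenly sized slices.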
Hope this helps.
We have achieved this by arranging the project folders so they sort to put the longest-running test assemblies first. You can see the order in which VSTest finds the assemblies in the Azure DevOps output. From there you can rename folders to affect the order.
It would be nice if there were another way to affect this.
The requirement is to run all the tests belonging to a suite in a certain thread.
For example, having suite1, suite2, suite3, and so on,
I'd like all the tests belonging to suite1 to be executed in one thread (thread1), tests belonging to suite2 in another thread (thread2), and tests from suite3 in one more separate thread (thread3).
As far as I can tell, pytest-xdist offers only two grouping options:
- --dist=loadscope (group by module)
- --dist=loadfile (group by filename)
What I need would be something like --dist=loaddirectory, to grab all the tests in a certain directory (e.g., one group for each directory under the path containing all the test suites).
Of course, I could launch pytest several times, passing a suite directory each time, but I'm afraid that would hurt performance, since that means several instances of pytest running rather than several threads.
Could anybody please advise if you are familiar with something that can help tackle this requirement?
Any help or ideas, or even directions to dig are much appreciated.
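One avenue worth checking: newer versions of pytest-xdist (2.3+) add a third mode, `--dist=loadgroup`, which sends every test carrying the same `xdist_group` mark to the same worker. A small `conftest.py` hook that derives the group name from each test's directory approximates the `loaddirectory` behaviour described above; this is a sketch, and you should confirm the mode is available in your pytest-xdist version:

```python
# conftest.py -- sketch assuming pytest-xdist >= 2.3 (--dist=loadgroup).
# Tags every collected test with an xdist_group named after its suite
# directory, so all tests from one directory run on a single worker.
import pathlib

import pytest


def pytest_collection_modifyitems(config, items):
    for item in items:
        suite = pathlib.Path(str(item.fspath)).parent.name
        item.add_marker(pytest.mark.xdist_group(name=suite))
```

Then run, for example, `pytest -n 3 --dist=loadgroup` so suite1, suite2, and suite3 each stay on their own worker. Note that xdist workers are processes, not threads, but unlike launching pytest once per directory, collection and reporting happen in a single run.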
I have a SoapUI framework which is modular: I can execute test cases based upon business operations, which are organized into different suites. With this in mind, I need data from other test cases to use in my current test case (which is in a different suite). To accomplish this, I use a Run TestCase step in my current test case, which runs the test case in suite 1 and brings the needed data into my current test case (suite 2) via project properties. After I run the current test case, I need the project properties to be cleared, and I have the Groovy code to do that.

Here's the issue: since this is modular, I need to clear the project properties ONLY after the CURRENT test case is run. Using a teardown script at the test case level isn't working, because it will always clear the project properties EVEN IF this is not the current test case being run. Meaning: my current suite is suite 2, and all the test cases in suite 2 have a teardown script that removes the project properties. When I run a test case in suite 3 and need data from a test case in suite 2, the properties will not be present, due to the teardown scripts found in suite 2 (at the test case level). Again, I only need the properties cleared when the last step of the current test case is run, without affecting any other test cases during modular execution. I hope that makes sense.

As a side note, this framework allows me to test business operations by suite for ad hoc testing. It also allows me to run a full regression from beginning to end (testing all suites in a row). The solution must not break the full regression run either.

Any ideas on how to do this?
In order to do this I had to create a setup and teardown script at every level: project, suite, and test case.
In the setup script, I created a variable called Is_Running, together with an if statement that says: if Is_Running is null, fill that variable with the name of the project, suite, or test case that is currently being executed. For example, if I'm executing at the project level, this code first checks whether there is anything in Is_Running, and if not, it writes the project name into that variable.
Then the teardown script at each level says that if the Is_Running variable equals the name of whatever level I'm running, erase the project properties. This ensures that the project properties are only erased once the current level has finished executing, and not in the middle of a test (when using other suites).
For example: if I start my testing at the suite level and choose to run Suite3, the setup script will write "Suite3" into the Is_Running variable. Once Suite3 engages Suite2 to run the needed test cases, Suite2's setup script sees that the Is_Running variable is not null, so it does NOT write its name into Is_Running. As such, Suite2's teardown script does not erase the project properties, since the name does not match. Once Suite3 has completed all its test steps, its teardown script sees that Is_Running contains "Suite3", so it deletes the project properties.
This approach allows me to run the project at any level, with the project properties deleted only after the current level is finished running. I needed to know Groovy well enough to do all the work mentioned above, but the approach is what I was looking for in this question. If you know a less complicated way, please leave me a note!
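The ownership guard described above can be sketched language-neutrally. The real scripts are Groovy setup/teardown scripts operating on SoapUI project properties; the dict and function names below are purely illustrative:

```python
# Language-neutral sketch of the Is_Running guard described above.
# "properties" stands in for the shared SoapUI project properties.

properties = {}


def setup_script(level_name):
    # Claim ownership only if no outer level is already running.
    if properties.get("Is_Running") is None:
        properties["Is_Running"] = level_name


def teardown_script(level_name):
    # Clear the shared properties only when the owning level finishes.
    if properties.get("Is_Running") == level_name:
        properties.clear()
```

Running Suite3, which internally invokes Suite2, leaves the properties intact through Suite2's teardown (the name doesn't match) and clears them only when Suite3's own teardown fires.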
I use the AAA pattern (Arrange, Act, Assert) in all my automated tests (unit tests, system tests, etc.).
Recently I started to write Coded UI tests, and now I wonder whether the AAA pattern fits here. Unlike unit tests, where each test has one act and one assert (or more than one assert), and where I can have hundreds of tests that run in less than a couple of minutes, Coded UI tests run much longer. So if I write Coded UI tests the same way I write my unit tests, it will take them a couple of hours (if not longer) to run.
If I compare Coded UI tests with manual UI tests, the manual tests don't use the AAA pattern, in order to save time (not repeating the same 'Arrange' action over and over just to check a field's value after another click).
What do you do in your apps? How do you recommend writing Coded UI tests?
Yes, use the same approach here as well. A manual tester does a lot of verification in a single test. When automating, it is better to split a test case with multiple verifications into smaller test cases, with a minimum of assertions per test case. This will make your test cases easier to maintain in the future. It's not good practice to proceed with another set of AAA once you have already made an assertion in a test method.
Time is not an issue: UI automation is supposed to run slowly. Coded UI tests are usually run on a test lab with a test controller and test agents. There, your hundreds of tests will run in parallel across all the test agents, reducing the overall execution time.
My CodedUI test method goes like this:
[TestMethod]
public void VerifyWhenThisThenThis()
{
// Prepare test data, Perform prerequisite action
// Do test
// One or more assertions to verify one major requirement only.
// If other major verification exist then split the test case
// and write a new test method for it.
}
If a test case is huge, ask the manual tester to split it (or split it yourself, informing the tester). Maintain a separate automation test pack with shorter test cases than the ones the manual testers have.
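The one-major-assertion-per-test idea can be illustrated with a runnable sketch; shown here in Python with a hypothetical fake page object rather than C#/Coded UI, purely to make the structure concrete:

```python
# Sketch: one major assertion per AAA test, each with its own Arrange,
# instead of one long test chaining many verifications.
# FakeLoginPage is a made-up stand-in for a real UI driver.
import unittest


class FakeLoginPage:
    """Stand-in for a UI page driver so the sketch is self-contained."""

    def __init__(self):
        self.message = ""
        self.logged_in = False

    def login(self, user, password):
        self.logged_in = (password == "secret")
        self.message = "Welcome" if self.logged_in else "Invalid password"


class LoginTests(unittest.TestCase):
    def test_valid_login_succeeds(self):
        page = FakeLoginPage()            # Arrange
        page.login("alice", "secret")     # Act
        self.assertTrue(page.logged_in)   # Assert: one major check only

    def test_invalid_login_shows_error(self):
        page = FakeLoginPage()            # Arrange: fresh state per test
        page.login("alice", "wrong")      # Act
        self.assertEqual(page.message, "Invalid password")  # Assert
```

Each test re-arranges its own state, so a failure in one verification never masks or skips the others.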
I am writing a Rails 3.1 app, and I have a set of three cucumber feature files. When run individually, as with:
cucumber features/quota.feature
-- or --
cucumber features/quota.feature:67 # specifying the specific individual test
...each feature file runs fine. However, when all run together, as with:
cucumber
...one of the tests fails. It's odd because only one test fails; all the other tests in the feature pass (and many of them do similar things). It doesn't seem to matter where in the feature file I place this test; it fails if it's the first test or way down there somewhere.
I don't think it can be the test itself, because it passes when run individually or even when the whole feature file is run individually. It seems like it must be some effect related to running the different feature files together. Any ideas what might be going on?
It looks like there is coupling between your scenarios. Your failing scenario assumes that the system is in some state. When the scenario runs individually, the system is in that state, and so the scenario passes. But when you run all the scenarios, scenarios that ran previously change this state, and so it fails.
You should solve it by making your scenarios completely independent: the work of any scenario shouldn't influence the results of other scenarios. This is highly encouraged in The Cucumber Book and Specification by Example.
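One common way to make scenarios independent is a Background that rebuilds the required state before every scenario, instead of relying on an earlier scenario having run. A minimal Gherkin sketch (the feature and step names are hypothetical, not from the asker's app):

```gherkin
Feature: Quota
  Background:
    Given a clean database
    And a user with 5 quota credits

  Scenario: Spending a credit
    When the user spends 1 credit
    Then the user has 4 credits left
```

With this shape, every scenario passes or fails on its own, in any order.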
I had a similar problem and it took me a long time to figure out the root cause.
I was using @selenium tags to test jQuery scripts on a Selenium client.
My page had an Ajax call that was sending a POST request. I had a bug in the JavaScript and the POST request was failing. (The feature wasn't complete and I hadn't yet written steps to verify the result of the Ajax call.)
This error was recorded in Capybara.current_session.server.error.
When the following non-Selenium feature was executed, a Before hook within Capybara called Capybara.reset_sessions!
This in turn called:
def reset!
  driver.reset! if @touched
  @touched = false
  raise @server.error if @server and @server.error
ensure
  @server.reset_error! if @server
end
@server.error was not nil for each scenario in the following feature(s), and Cucumber reported each step as skipped.
The solution in my case was to fix the Ajax call.
So Andrey Botalov and Doug Noel were right; I had carry-over from an earlier feature.
I had to keep debugging until I found the exception that was being raised and investigate what was generating it.
I hope this helps someone else who didn't realise they had carry-over from an earlier feature.