VSTest: Order the execution of test assemblies

Our codebase has more than 100 projects, each with tests. Some test assemblies take much longer to execute than others.
Azure DevOps Server runs our whole test suite in parallel, which makes it really fast.
The problem is that the long-running tests are started in the middle of the test run, which makes the whole run take longer than necessary.
Is there a way to influence the order in which the test assemblies are started? I want to start the long-running test assemblies first, followed by the fast ones.

Since you are running the tests in parallel, you could try the "Based on past running time of tests" option in the Visual Studio Test task.
According to the documentation on running tests in parallel:
This setting considers past running times to create slices of tests so that each slice has approximately the same running time. Short-running tests will be batched together, while long-running tests will be allocated to separate slices.
This option groups tests by running time, so each group finishes in roughly the same time.
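If you define the pipeline in YAML, a minimal sketch of that configuration might look like the following (the assembly filter pattern is illustrative; check the inputs supported by your task version):

- task: VSTest@2
  inputs:
    testAssemblyVer2: |
      **\*Tests.dll
      !**\obj\**
    runInParallel: true
    distributionBatchType: basedOnExecutionTime   # slice tests by past running time

Note that the execution-time-based slicing takes effect when the test run is distributed across multiple agents; runInParallel controls parallel execution within a single agent.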
Hope this helps.

We have achieved this by arranging the project folders so that they sort with the longest-running test assemblies first. You can see the order in which VSTest finds the assemblies in the Azure DevOps output; from there, you can rename folders to affect the order, as in the sketch below.
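For example, prefixing the folder names with a sort key (the names are purely illustrative):

tests/010_LongRunning.IntegrationTests/
tests/020_ServiceTests/
tests/030_Fast.UnitTests/

Since VSTest starts the assemblies in the order it finds them, the long-running assemblies are picked up first.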
It would be nice if there were another way to affect this.

Related

Is running a test case that comes from a different repository possible for webdriverio/node?

If there are two separate repositories containing webdriverio/node automation tests for different projects A and B, but some of the tests in project A require running tests that cover an area of project B, can we make a call inside a test belonging to project A that runs a test from project B, located in the other repository?
I know that in this particular case the quickest solution would be to just copy/paste the necessary code, but I'm wondering whether I can avoid duplicating the test code by running particular test cases from other repositories.
Well, you can create test cases as functions and then import them into project A/B/whatever.
///// project A (e.g. loginTestcases.js)
function loginTestcases() {
    describe('Login', () => {
        it('Login with valid credentials', () => {
            // some tests
        });
    });
}
module.exports = { loginTestcases };

///// project B
// import it in your other project's spec file; the path depends on
// how project A's tests are packaged or published
const { loginTestcases } = require('project-a/loginTestcases');
loginTestcases();
But personally, I don't recommend this approach. It isn't clear what you are testing or what the body of the test case is. Also, your tests should be independent, with no dependencies on other test cases.
Check this:
Rule 10: Write Independent and Isolated Tests
An important methodology of test authoring is creating self-contained, independent flows. This allows tests to run with high parallelism, which is crucial for scaling the test suites. If, for example, you have 1,000 tests that run for a minute each, running them one by one will take more than 16 hours. Running them at full concurrency can cut this testing time to 1 minute.
https://devops.com/10-rules-for-writing-automated-tests/

Speed up Meteor test refresh

When testing a Meteor app, I notice that Meteor does a lot of repeated tasks, such as:

Downloading missing packages (several seconds). It seems that it would be more efficient to check for an updated package.json and .meteor/versions before "downloading missing packages", especially when the only changes are within unit tests or other application-specific code (i.e. no new imports).

Building web.cordova (which I do not use). I am pretty sure that specifying the target for testing is possible, so that the project is only built for web.browser, for example.

If there are errors, building and executing everything three times (and failing three times). When a test fails, why does it have to try again with the exact same code? Is there any use case where this makes sense?

Right now, every time a test module changes, it takes several seconds before the tests are run again because of all these tasks. Is there any way to optimize this and make it more efficient and faster?

Applying BDD testing to batch scenarios?

I'm trying to apply BDD practices in my organization. I work in a bank, where the nightly batch job is a huge, orchestrated, multi-system flow of batch jobs running and passing data to one another.
During our tests, interactive online tests probably make up only 40-50% of test scenarios, while the rest are embedded inside the batch job. As an example, a test scenario may be:
Given that my savings account has a balance of $100 as of 10PM
When the nightly batch is run at 11PM
Then at 3AM after the batch run is finished, I should come back and see that I have an additional accrued interest of $0.001.
And the general ledger of the bank should have an additional entry for accrued interest of $0.001.
So as you can see, this is an extremely asynchronous scenario. If I were to use Cucumber to trigger it, I could probably create a step definition to insert the $100 balance into the account by 10PM, but it would not be realistic to use Cucumber to trigger the batch run at 11PM, as batch jobs are usually executed by operators using their own scheduling tools, such as Control-M. And if Cucumber then has to wait and listen for a few hours before verifying the accrued interest, I'm not sure whether I'll run into a timeout.
This is just one scenario. Batch runs are very expensive for the bank, so we always tack on as many scenarios as possible to ride on a single batch run. We also have aging scenarios where we need to run 6 months' worth of batches just to check whether the final interest at the end of a fixed-deposit term is correct (I definitely cannot make Cucumber wait and listen for that long, can I?)
My question is, is there any example where BDD practices were applied to large batch scenarios such as these? How would one approach this?
Edit, to explain why I am not aiming to execute isolated test scenarios where I am in control:
We do isolated scenarios at one of the test levels (we call it Systems Test in my bank), and BDD does indeed work in that context. But eventually we need to hit a test level that has an entire end-to-end environment, typically in SIT. In this environment, it is a requirement that multiple test scenarios run in parallel, none of which has complete control over the environment. Depending on the scope of the project, this environment may run up to 200 applications. So customer channels such as Internet Banking will run transactional scenarios, while at the core banking system, scenarios such as interest calculation, automatic transfers, etc. will be executed. There will also be accounting scenarios where a general ledger system consolidates and balances all the accounts in the environment. Manual testing in this environment frequently requires at least 30-50 people executing transactions and checking the results.
What I am trying to do is find a way to leverage a BDD framework to automate test execution and capture the results, so that we do not have to track them all manually in the environment.
It sounds to me as if you are not in control of the execution of the scenario.
Obviously, waiting a couple of hours before validating a result is not a great idea.
Is it possible to extract just the part of the batch that is interesting for this scenario? If so, I would not expect the execution time to be 4-6 hours.
If it isn't possible to execute the desired functionality in isolation, then you have a testability problem in your system. This is very common and something you really want to address. If the only way to test is to run the entire system, then you are not able to say with confidence that it is working properly, since all the combinations that need testing are hard, sometimes even impossible, to execute.
Unfortunately, there doesn't seem to be a quick fix. You need to be in a position where you can verify small parts of the system, in order to verify them fast and reliably. And it doesn't matter whether you use Cucumber or any other tool for the verification; all tools will have the same issue.
One approach you might consider would be to have a reporting process that queries the results of each batch run. It would then store the results you are interested in (i.e. those from your tests) into a test analysis database.
I'm assuming that each batch run has a unique identifier. This identifier would be used as the key for the test results.
Here is an example of how it might work:
1. We know when the batch runs finish (say at 4AM). We schedule a reporting job to start after batch run completion (say at 5AM) that analyses the test accounts.
2. The reporting job looks at Account X and Account Y. It records the amount of money in each account in a table, alongside the unique identifier for the batch run. This information is stored in a test results database.
3. A separate process matches test scenarios with test results. It knows that test scenario 29 was tied to batch run ZZ20, so it looks in the test results database for the analysis from batch run ZZ20.
4. In the morning, the test engineer checks the results of the run. They see that test scenario 29 failed, as there was only £100 in Account X rather than the £100.001 that was expected.
This setup would allow you to verify asynchronous batch runs without keeping a test runner waiting. It would be challenging to configure, though, as you would need a lot of automation around reporting and around linking test scenarios with test results.
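A minimal sketch of that matching step, assuming a simple key-value store of results; every name here (TestResultsDb, ScenarioExpectation, Record, Lookup) is hypothetical, not a real framework API:

using System;
using System.Collections.Generic;

// Result captured by the post-batch reporting job, and the expectation a
// BDD scenario registered before the batch ran.
record BatchResult(string BatchRunId, string Account, decimal ActualBalance);
record ScenarioExpectation(int ScenarioId, string BatchRunId, string Account, decimal ExpectedBalance);

class TestResultsDb
{
    private readonly Dictionary<(string RunId, string Account), decimal> results = new();

    // Called by the 5AM reporting job for each monitored account.
    public void Record(BatchResult r) => results[(r.BatchRunId, r.Account)] = r.ActualBalance;

    public decimal? Lookup(string batchRunId, string account) =>
        results.TryGetValue((batchRunId, account), out var balance) ? balance : (decimal?)null;
}

class Program
{
    static void Main()
    {
        var db = new TestResultsDb();
        // Recorded by the reporting job after batch run ZZ20 finished.
        db.Record(new BatchResult("ZZ20", "Account X", 100m));

        var expectations = new[]
        {
            new ScenarioExpectation(29, "ZZ20", "Account X", 100.001m)
        };

        // Match each scenario with the result of the batch run it was tied to.
        foreach (var e in expectations)
        {
            var actual = db.Lookup(e.BatchRunId, e.Account);
            var verdict = actual == e.ExpectedBalance
                ? "PASSED"
                : $"FAILED (expected {e.ExpectedBalance}, got {actual})";
            Console.WriteLine($"Scenario {e.ScenarioId} [{e.BatchRunId}]: {verdict}");
        }
    }
}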

How to order TestClasses in MSTest

I am writing Selenium test cases for a workflow application.
I have several different test classes in an MSTest project, say ClassA, ClassB, ClassC and so on.
How can I order the execution of test classes in MSTest?
Thanks
You can't control the execution order; tests should be able to run in any order. They should not depend on any global state; that will only lead to pain in the end. Try to remove all dependencies between tests and test classes.
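For example, instead of ClassB depending on state that ClassA's tests happen to leave behind, each test class can build its own state in a [TestInitialize] method, which MSTest runs before every test. A minimal sketch; the Account type is a hypothetical stand-in for your application code:

using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical domain type standing in for whatever shared state the
// classes used to pass between each other.
public class Account
{
    public decimal Balance { get; private set; }
    public Account(decimal balance) => Balance = balance;
    public void Withdraw(decimal amount) => Balance -= amount;
}

[TestClass]
public class ClassB
{
    private Account account;

    [TestInitialize]
    public void Setup()
    {
        // Build the state this class needs instead of relying on ClassA
        // having run first; the execution order no longer matters.
        account = new Account(balance: 100m);
    }

    [TestMethod]
    public void Withdraw_ReducesBalance()
    {
        account.Withdraw(25m);
        Assert.AreEqual(75m, account.Balance);
    }
}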

Is AAA a good practice for Coded UI tests?

I use the AAA syntax (Arrange, Act, Assert) in all my automated tests (unit tests, system tests, etc).
Recently I started to write Coded UI tests, and now I wonder whether the AAA syntax fits there too. Unlike unit tests, where each test has one act and one assert (or several asserts), and where I can have hundreds that run in less than a couple of minutes, Coded UI tests run much longer. So if I write Coded UI tests the same way I write my unit tests, it will take them a couple of hours (if not longer) to run.
If I compare Coded UI tests with manual UI tests, the manual tests don't use the AAA syntax, in order to save time (not repeating the same 'Arrange' action over and over just to check a field's value after another click).
What do you do in your apps? How do you recommend writing Coded UI tests?
Yep, use the same approach here as well. A manual tester does a lot of verification in a single test. When automating, it is better to split a test case with multiple verifications into smaller test cases, with a minimum of assertions per test case. This will make your test cases easier to maintain in the future. It's not good practice to proceed with another set of AAA once you have already done an assertion in a test method.
Time is not an issue. UI automation is supposed to run slowly. Coded UI tests are usually run in a test lab with a test controller and test agents. There, your hundreds of tests will run in parallel across all the test agents, reducing the overall execution time.
My CodedUI test method goes like this:
[TestMethod]
public void VerifyWhenThisThenThis()
{
    // Prepare test data, perform prerequisite actions
    // Do test
    // One or more assertions to verify one major requirement only.
    // If another major verification exists, split the test case
    // and write a new test method for it.
}
If a test case is huge, ask the manual tester to split it (or split it yourself and inform the tester). Maintain a separate automation test pack with shorter test cases than the ones the manual testers have.
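As a concrete illustration of the splitting, a manual test case that logs in and then verifies both the greeting and the account balance could become two focused test methods, each with one major assertion. This is a sketch only; LoginTests, HomePage and Login are hypothetical page-object stand-ins, not Coded UI API:

using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class LoginTests
{
    // Hypothetical page-object stub standing in for the real UI automation.
    private class HomePage
    {
        public string GreetingText { get; init; }
        public string BalanceText { get; init; }
    }

    private HomePage Login(string user, string password) =>
        new HomePage { GreetingText = $"Welcome, {user}!", BalanceText = "$100.00" };

    [TestMethod]
    public void Login_ShowsGreeting()
    {
        var home = Login("user", "secret");
        // Exactly one major verification per test method.
        Assert.AreEqual("Welcome, user!", home.GreetingText);
    }

    [TestMethod]
    public void Login_ShowsAccountBalance()
    {
        var home = Login("user", "secret");
        Assert.AreEqual("$100.00", home.BalanceText);
    }
}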
