Handling multithreading in XML files for running test cases in parallel

I'm new to multithreading; here is my problem statement.
I have an XML file (TestCase.xml) where each tag represents a test case, something like below:
TestCase.xml
In turn, each main tag has a child tag that links to another XML file (TestStep.xml), which dictates the steps of the test case; it's TS in the above example.
TestStep.xml
The execution always starts from TestCase.xml, based on the id provided. With this overview: I have 100 test cases in my suite and I want to execute them in parallel, i.e. execute at least 5-6 test cases at the same time. I'm not able to use external plug-ins like TestNG, JUnit, BDD frameworks or Maven Surefire. After a lot of R&D we have ended up with multithreading, and I need assistance on how to implement it.
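A minimal sketch of one way to approach this with a plain ExecutorService from java.util.concurrent; TestCaseXmlParser and TestCaseRunner are hypothetical helpers standing in for your own XML parsing and test execution code:

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelSuiteRunner {
    public static void main(String[] args) throws InterruptedException {
        // Hypothetical helper: parse TestCase.xml into the list of test case ids.
        List<String> testCaseIds = TestCaseXmlParser.readIds("TestCase.xml");

        // A fixed pool of 6 threads runs at most 6 test cases at the same time.
        ExecutorService pool = Executors.newFixedThreadPool(6);
        for (String id : testCaseIds) {
            // Hypothetical runner: resolves the TS child tag to TestStep.xml
            // and executes the steps for this test case.
            pool.submit(() -> new TestCaseRunner(id).execute());
        }

        // Stop accepting new tasks and wait for the submitted ones to finish.
        pool.shutdown();
        pool.awaitTermination(2, TimeUnit.HOURS);
    }
}

The important constraint is that each TestCaseRunner must not share mutable state (drivers, file handles, result buffers) with the others; otherwise the parallel runs will interfere with one another.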

Related

Pytest running a certain test suite in a certain thread

The requirement is to run all the tests belonging to a suite in a certain thread.
For example having suite1, suite2, suite3 and so on.
And I'd like to have all the tests belonging to suite1 executed in one thread (thread1), tests belonging to suite2 in another thread (thread2), and tests from suite3 in one more separate thread (thread3).
As per xdist there are only 2 options:
- --dist=loadscope (by module)
- --dist=loadfile (by filename)
What I need would be a kind of --dist=loaddirectory that grabs all the tests in a certain directory (e.g. for each dir in a directory given as the path to all the test suites).
For sure I can try to launch pytest several times, each time passing a suite directory, but I'm afraid it would have bad consequences for performance, as in that case there would be several instances of pytest running rather than several threads.
So could anybody please advise if you are familiar with something that can help tackle this requirement?
Any help or ideas, or even directions to dig are much appreciated.

Execute SoapUI test on multi-threads

I have a SoapUI test which uses an input file to read lines as input for requests, so there is a loop which reads data, executes the request and writes output to a file. Response times are too long, so the processing of this file should be done asynchronously, but I am not sure how SoapUI can handle this. There are file attachments in the SOAP requests, which are not handled by the current version of JMeter.
As per SoapUI's documentation below, both test cases and test suites can be executed in parallel mode.
In the case of TestSuites and TestCases these can be executed either in sequence or in parallel, as configured with the corresponding toolbar buttons.
In the toolbar image referenced there, the first icon stands for sequential execution and the second one (with multiple parallel arrows) stands for parallel execution mode.
The user can select either one before executing the tests.
Hope this helps.
Note that SoapUI does not allow test steps to be executed in parallel. If you need custom execution, i.e. the same test case and steps executed in parallel, there is a sample project done for that; it can be used as a reference and applied to your case.
I understood this question as requiring the ability to call a service asynchronously due to the time it takes to process. By this, I mean SoapUI makes a request to a web service and, instead of waiting for the response, carries on; at some point later, SoapUI receives the response.
SoapUI can handle this. I haven't tried it myself, but when reading some guides recently, I noticed it can be done.
See....
Blog Guide
SoapUI Forum
In short, it involves setting up a mock service to receive the response, which can then be validated.
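Outside SoapUI, the general shape of this asynchronous pattern looks like the sketch below, written in plain Java with the JDK's HttpClient purely as an illustration; the endpoint URL is a placeholder:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class AsyncCallSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://example.com/service")) // placeholder endpoint
                .POST(HttpRequest.BodyPublishers.ofString("<soapenv:Envelope/>"))
                .build();

        // Fire the request and carry on; the callback runs whenever the response
        // arrives, playing the role the mock service plays in the SoapUI setup.
        client.sendAsync(request, HttpResponse.BodyHandlers.ofString())
              .thenAccept(response -> System.out.println("Received: " + response.statusCode()));

        System.out.println("Request sent, carrying on with other work...");
        Thread.sleep(5_000); // keep the JVM alive long enough for the demo response
    }
}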

Applying BDD testing to batch scenarios?

I'm trying to apply BDD practices in my organization. I work in a bank where the nightly batch job is a huge, orchestrated, multi-system flow of batch jobs running and passing data between one another.
During our tests, interactive online tests probably make up only 40-50% of the test scenarios, while the rest are embedded inside the batch job. As an example, a test scenario may be:
Given that my savings account has a balance of $100 as of 10PM
When the nightly batch is run at 11PM
Then at 3AM after the batch run is finished, I should come back and see that I have an additional accrued interest of $0.001.
And the general ledger of the bank should have an additional entry for accrued interest of $0.001.
So as you can see, this is an extremely asynchronous scenario. If I were to use Cucumber to trigger it, I could probably create a step definition to insert the $100 balance into the account by 10PM, but it would not be realistic to use Cucumber to trigger the batch at 11PM, as batch jobs are usually executed by operators using their own scheduling tools, such as Control-M. And if Cucumber then has to wait and listen for a few hours before verifying the accrued interest, I'm not sure whether I'll run into a timeout.
This is just one scenario. Batch runs are very expensive for the bank, so we always tack on as many scenarios as possible to ride on a single batch run. We also have aging scenarios where we need to run 6 months' worth of batches just to check whether the final interest at the end of a fixed deposit term is correct (I definitely cannot make Cucumber wait and listen for that long, can I?)
My question is, is there any example where BDD practices were applied to large batch scenarios such as these? How would one approach this?
Edit, to explain why I am not aiming to execute isolated test scenarios where I am in control:
We do isolated scenarios at one of the test levels (we call it Systems Test in my bank), and BDD indeed works in that context. But eventually we need to hit a test level that has an entire end-to-end environment, typically in SIT. In this environment, it is a requirement that multiple test scenarios run in parallel, none of which has complete control over the environment. Depending on the scope of the project, this environment may run up to 200 applications. Customer channels such as Internet Banking will run transactional scenarios, while at the core banking system, scenarios such as interest calculation and automatic transfers will be executed. There will also be accounting scenarios where a general ledger system consolidates and balances all the accounts in the environment. Manual testing in this environment frequently requires at least 30-50 personnel executing transactions and checking the results.
What I am trying to do is find a way to leverage a BDD framework to automate test execution and capture the results, so that we do not have to track them all manually in the environment.
It sounds to me as if you are not in control of the execution of the scenario.
Obviously, waiting for a couple of hours before validating a result is not a great idea.
Is it possible to extract just the part of the batch that is interesting for this scenario? If that is possible, then I would not expect the execution to take 4-6 hours.
If it isn't possible to execute the desired functionality in isolation, then you have a problem regarding the testability of your system. This is very common and something you really want to address. If the only way to test is to run the entire system, then you are not able to confidently say that it is working properly, since all the combinations that need testing are hard, sometimes even impossible, to execute.
Unfortunately, there doesn't seem to be a quick fix. You need to be in a position where you are able to verify small parts of the system, in order to verify them fast and reliably. And it doesn't matter whether you use Cucumber or any other tool for the verification; all tools will have the same issue.
One approach you might consider would be a reporting process that queries the results of each batch run and then stores the results you are interested in (i.e. those from your tests) into a test analysis database.
I'm assuming that each batch run has a unique identifier. This identifier would be used as the key for the test results.
Here is an example of how it might work:
We know when the batch runs are finished (say this is at 4am). We schedule a reporting job to start after batch run completion (say at 5am) that analyses the test accounts.
The reporting job looks at Account X and Account Y. It records the amount of money in each account in a table, alongside the unique identifier for the batch run. This information is stored in a test results database.
A separate process matches up test scenarios with test results. It knows test scenario 29 was tied to batch run ZZ20, so it goes looking in the test results database for the analysis from batch run ZZ20.
In the morning the test engineer checks the results of the run. They see that test scenario 29 failed, as there was only £100 in Account X rather than the £100.001 that was expected.
This setup would allow you to synchronously process asynchronous batch runs. It would be challenging to configure, though, as you would need a lot of automation around reporting and linking test scenarios with test results.
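To make the matching step concrete, here is a minimal sketch of a checker for the example above, looking up batch run ZZ20 on behalf of scenario 29; the table name, column names and JDBC URL are all made-up assumptions about what such a test results database could look like:

import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class BatchResultChecker {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection string and schema; adjust to your environment.
        try (Connection db = DriverManager.getConnection("jdbc:h2:mem:testresults");
             PreparedStatement stmt = db.prepareStatement(
                 "SELECT balance FROM batch_results WHERE batch_run_id = ? AND account = ?")) {
            stmt.setString(1, "ZZ20");      // batch run tied to test scenario 29
            stmt.setString(2, "Account X");
            try (ResultSet rs = stmt.executeQuery()) {
                if (rs.next()) {
                    BigDecimal actual = rs.getBigDecimal("balance");
                    BigDecimal expected = new BigDecimal("100.001");
                    System.out.println(actual.compareTo(expected) == 0
                            ? "Scenario 29: PASS"
                            : "Scenario 29: FAIL, got " + actual);
                } else {
                    System.out.println("Scenario 29: no result recorded for batch run ZZ20");
                }
            }
        }
    }
}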

Is AAA a good practice for Coded UI tests?

I use the AAA syntax (Arrange, Act, Assert) in all my automated tests (unit tests, system tests, etc.).
Recently I started to write Coded UI tests, and now I wonder whether the AAA syntax fits here. Unlike unit tests, where each test has one act and one assert (or several asserts), and where I can have hundreds of tests that run in less than a couple of minutes, Coded UI tests run much longer. So if I write Coded UI tests the same way I write my unit tests, it will take them a couple of hours (if not longer) to run.
If I compare Coded UI tests with manual UI tests, the manual tests don't use the AAA syntax, in order to save time (not repeating the same 'Arrange' action over and over just to check a field's value after another click).
What do you do in your apps? How do you recommend writing Coded UI tests?
Yep, use the same approach here too. A manual tester does a lot of verification in a single test. When automating, it is better to split a test case with multiple verifications into smaller test cases and do a minimum of assertions per test case. This will make test case maintenance easier in the future. It's not good practice to proceed with another set of AAA once you have already done an assertion in a test method.
Time is not an issue; UI automation is supposed to run slowly. Coded UI tests are usually run in a test lab with a test controller and test agents. There, your hundreds of tests will run in parallel across the test agents, reducing the overall execution time.
My CodedUI test method goes like this:
[TestMethod]
public void VerifyWhenThisThenThis()
{
    // Prepare test data, perform prerequisite actions (Arrange).
    // Do the test (Act).
    // One or more assertions to verify one major requirement only (Assert).
    // If another major verification exists, split the test case
    // and write a new test method for it.
}
If a test case is huge, ask the manual tester to split it (or split it yourself and inform the tester). Maintain a separate automation test pack with shorter test cases than the ones the manual testers have.

Cucumber: Each feature passes individually, but not together

I am writing a Rails 3.1 app, and I have a set of three cucumber feature files. When run individually, as with:
cucumber features/quota.feature
-- or --
cucumber features/quota.feature:67 # specifying a single test by line number
...each feature file runs fine. However, when all run together, as with:
cucumber
...one of the tests fails. It's odd, because only one test fails; all the other tests in the feature pass (and many of them do similar things). It doesn't seem to matter where in the feature file I place this test; it fails whether it's the first test or way down the file.
I don't think it can be the test itself, because it passes when run individually, or even when the whole feature file is run on its own. It seems like it must be some effect of running the different feature files together. Any ideas what might be going on?
It looks like there is coupling between your scenarios. Your failing scenario assumes that the system is in some state. When the scenario runs individually, the system is in that state, so the scenario passes. But when you run all the scenarios, the scenarios that ran previously change this state, and so it fails.
You should solve this by making your scenarios completely independent: the work of one scenario shouldn't influence the results of other scenarios. This is highly encouraged in The Cucumber Book and Specification by Example.
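As an illustration of that independence, a common pattern is a Before hook that resets shared state ahead of every scenario. This question is about Rails, but the idea is the same in any Cucumber flavour; here is a sketch in Cucumber-JVM, where TestDatabase and TestSessions are hypothetical helpers:

import io.cucumber.java.Before;

public class Hooks {
    @Before
    public void resetState() {
        // Runs before every scenario: put the system back into a known state
        // so that no scenario depends on what an earlier one left behind.
        TestDatabase.truncateAll(); // hypothetical helper that wipes the test tables
        TestSessions.clear();       // hypothetical helper that drops cached sessions
    }
}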
I had a similar problem and it took me a long time to figure out the root cause.
I was using @selenium tags to test jQuery scripts on a Selenium client.
My page had an Ajax call that was sending a POST request. I had a bug in the JavaScript, and the POST request was failing. (The feature wasn't complete and I hadn't yet written steps to verify the result of the Ajax call.)
This error was recorded in Capybara.current_session.server.error.
When the following non-Selenium feature was executed, a Before hook within Capybara called Capybara.reset_sessions!
This then called:
def reset!
  driver.reset! if @touched
  @touched = false
  raise @server.error if @server and @server.error
ensure
  @server.reset_error! if @server
end
@server.error was not nil for each scenario in the following feature(s), and Cucumber reported each step as skipped.
The solution in my case was to fix the Ajax call.
So Andrey Botalov and Doug Noel were right: I had carry-over from an earlier feature.
I had to keep debugging until I found the exception that was being raised and investigate what was generating it.
I hope this helps someone else who didn't realise they had carry-over from an earlier feature.
