Gauge test run: skip subsequent scenarios if one scenario fails in a spec file - getgauge

With gauge run specs, Gauge runs all scenarios even if one of them fails. That works in most cases; however, I need a spec's execution to stop if any of its scenarios fails.
For example, a spec has the following scenarios
A
B
C
If A fails, Gauge should not execute B and C, and should mark the spec as failed.

Gauge encourages scenarios to be independent of each other. If scenario A fails, it should not block the execution of scenarios B and C. Read the Gauge FAQ entry why-we-cannot-skip-all-tests-dynamically-during-a-gauge-run-if-there-is-a-test-failure for the reasoning behind this.

Related

Handling multithreading in XML files for running test cases in parallel

I'm new to multithreading; here is my problem statement.
I have an XML file (TestCase.xml) where each tag represents a test case, something like below:
TestCase.xml
In turn, each main tag has a child tag that links to another XML file (TestStep.xml), which dictates the steps of the test case; it's TS in the above example.
TestStep.xml
The execution always starts from TestCase.xml, based on the id provided. With this overview: I have 100 test cases in my suite and I want to execute them in parallel, i.e. run at least 5-6 test cases at the same time. I'm not able to use external plug-ins like TestNG, JUnit, BDD frameworks or Maven Surefire. After a lot of R&D we have ended up with plain multithreading. I would need assistance on how to implement this.
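For what it's worth, here is a minimal Groovy sketch of that idea using only the JDK's java.util.concurrent (no TestNG/JUnit/Surefire needed); runTestCase, the test case ids and the timeout below are placeholders standing in for your framework's real entry point, not part of it. The same java.util.concurrent calls work verbatim from plain Java.

import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit

// placeholder for the framework's real entry point: look up the id in
// TestCase.xml, resolve its TS link into TestStep.xml, run the steps
def runTestCase = { String id ->
    println "executing ${id} on ${Thread.currentThread().name}"
}

def testCaseIds = (1..100).collect { "TC${it}" }  // the 100 test case ids
def pool = Executors.newFixedThreadPool(6)        // at most 6 run at once

testCaseIds.each { id ->
    // each task must only touch its own state; shared XML parsers,
    // connections etc. need to be per-thread or thread-safe
    pool.submit { runTestCase(id) }
}

pool.shutdown()                          // stop accepting new tasks
pool.awaitTermination(1, TimeUnit.HOURS) // wait for everything to finish

The fixed-size pool is what caps the concurrency at 6; submitting all 100 tasks up front is fine, since the pool queues the rest until a worker thread is free.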

How to run specifications based on the order of the tags inputted

Example:
- Consider I have two specs (Spec 1 and Spec 2).
- In both specs I have a few scenarios, and each scenario has a tag representing the stage in which it has to run. Say Spec 1 has scenarios tagged "STAGE1" and "STAGE2", and the same is the case in Spec 2.
Now, I want to run all scenarios across all specifications (spec 1 and spec 2) in a particular order.
The order I want is
a. Run all the "STAGE1" scenarios first and then
b. Run all the "STAGE2" scenarios.
Further Constraints:
I do have a requirement to keep these in separate specifications because:
- I may choose to run a single specification without bothering with the stage-level ordering.
- I also want the "STAGE1" scenarios to set some data in the store, which can be consumed by the steps in the next stage, say "STAGE2".
So, in effect, I see my requirement is to have a command something like
gauge run specs -tags="STAGE1 | STAGE2"
but have Gauge execute all the "STAGE1" scenarios first and then all the "STAGE2" scenarios.
Gauge does not consider tags when deciding the order of specs. Furthermore, in your example you have listed a tag expression, and order can be hard to determine from an expression. For example, if you used !STAGE1, all it tells Gauge is to exclude that tag; there is no order to infer from it.
Instead, if you pass in a list of spec files or directories, Gauge will try to preserve that order of execution.
By default, Gauge does not guarantee any order. You'll have to use the --sort flag with gauge run. Ref: https://manpage.gauge.org/gauge_run.html
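If the per-stage ordering is the hard requirement, one possible workaround (a sketch, not a built-in Gauge ordering feature) is to split the run into two sequential invocations filtered by tag:
gauge run --tags "STAGE1" specs
gauge run --tags "STAGE2" specs
Note that this conflicts with the second constraint above: Gauge's in-memory data stores only live for the duration of a single run, so any data set by the "STAGE1" scenarios would have to be persisted externally (for example in a file) for the "STAGE2" run to consume.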

Applying BDD testing to batch scenarios?

I'm trying to apply BDD practices in my organization. I work in a bank, where the nightly batch job is a huge orchestrated, multi-system flow of batch jobs running and passing data between one another.
During our tests, interactive online tests probably make up only 40-50% of test scenarios while the rest are embedded inside the batch job. As an example, the test scenario may be:
Given that my savings account has a balance of $100 as of 10PM
When the nightly batch is run at 11PM
Then at 3AM after the batch run is finished, I should come back and see that I have an additional accrued interest of $0.001.
And the general ledger of the bank should have an additional entry for accrued interest of $0.001.
So as you can see, this is an extremely asynchronous scenario. If I were to use Cucumber to trigger it, I could probably create a step definition to insert the $100 balance into the account by 10PM, but it would not be realistic to use Cucumber to trigger the batch run at 11PM, as batch jobs are usually executed by operators using their own scheduling tools such as Control-M. And if Cucumber then had to wait and listen for a few hours before verifying the accrued interest, I'm not sure whether I would run into a timeout.
This is just one scenario. Batch runs are very expensive for the bank and we always tack on as many scenarios as possible to ride on a single batch run. We also have aging scenarios where we need to run 6 months of batch just to check whether the final interest at the end of a fixed deposit term is correct or not (I definitely cannot make Cucumber wait and listen for that long, can I?)
My question is, is there any example where BDD practices were applied to large batch scenarios such as these? How would one approach this?
Edit to explain why I am not targeting to execute isolated test scenarios where I am in control:
We do isolated scenarios in one of the test levels (we call it Systems Test in my bank), and BDD indeed does work in that context. But eventually we need to hit a test level that has an entire end-to-end environment, typically in SIT. In this environment, it is a requirement that multiple test scenarios run in parallel, none of which has complete control over the environment. Depending on the scope of the project, this environment may run up to 200 applications. So customer channels such as Internet Banking will run transactional scenarios, while at the core banking system, scenarios such as interest calculation, automatic transfers etc. will be executed. There will also be accounting scenarios where a general ledger system consolidates and balances all the accounts in the environment. Manual testing in this environment frequently requires at least 30-50 personnel executing transactions and checking on results.
What I am trying to do is to find a way to leverage on a BDD framework to automate test execution and capture the results so that we do not have to manually track them all in the environment.
It sounds to me as if you are not in control of the execution of the scenario.
Obviously, waiting for a couple of hours before validating a result is not a great idea.
Is it possible to extract just the part of the batch that is interesting for this scenario? If that is possible, then I would not expect the execution to take 4-6 hours.
If it isn't possible to execute the desired functionality in isolation, then you have a problem regarding test-ability of your system. This is very common and something you really want to address. If the only way to test is to run the entire system, then you are not able to confidently say that it is working properly since all combinations that need testing are hard, sometimes even impossible, to execute.
Unfortunately, there doesn't seem to be a quick fix. You need to be in a position where you are able to verify small parts of the system, in order to verify them fast and reliably. And it doesn't matter whether you are using Cucumber or any other tool for the verification; all tools will have the same issue.
One approach you might consider would be to have a reporting process that queries the results of each batch run. It would then store the results you are interested in (i.e. those from your tests) into a test analysis database.
I'm assuming that each batch run has a unique identifier. This identifier would be used as the key for the test results.
Here is an example of how it might work:
We know when the batch runs are finished (say this is at 4am). We schedule a reporting job to start after batch run completion (say at 5am) that analyses the test accounts.
The reporting job looks at Account X and Account Y. It records the amount of money in their account in a table alongside the unique identifier for the batch run. This information is stored in a test results database.
A separate process matches up test scenarios with test results. It knows test scenario 29 was tied to batch run ZZ20 and so goes looking in the test results database for the analysis from batch run ZZ20.
In the morning the test engineer checks the results of the run. They see that test scenario 29 failed as there was only £100 in Account X rather than the £100.001 that was expected.
This setup would allow you to synchronously process asynchronous batch runs. It would be challenging to configure though, as you would need to do a lot of automation around reporting and linking test scenarios with test results.
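As a rough Groovy sketch of that flow (the table layout, JDBC details and identifiers below are illustrative assumptions, not anything prescribed above; the toy accounts table stands in for the bank's real ledger):

import groovy.sql.Sql

// placeholder connection details; any JDBC database would do
def db = Sql.newInstance('jdbc:h2:mem:testdb', 'sa', '', 'org.h2.Driver')
db.execute('CREATE TABLE test_results ' +
           '(batch_run_id VARCHAR(20), account_id VARCHAR(20), balance DECIMAL(19,6))')

// toy stand-in for the bank-side ledger queried by the reporting job
db.execute('CREATE TABLE accounts (id VARCHAR(20), balance DECIMAL(19,6))')
db.execute("INSERT INTO accounts VALUES ('ACCOUNT_X', 100.001)")

// Reporting job (scheduled after batch completion, e.g. 5AM): snapshot
// the test accounts, keyed by the batch run's unique identifier.
def recordResults(Sql db, String batchRunId, List<String> accountIds) {
    accountIds.each { accountId ->
        def row = db.firstRow('SELECT balance FROM accounts WHERE id = ?', [accountId])
        db.executeInsert(
            'INSERT INTO test_results (batch_run_id, account_id, balance) VALUES (?, ?, ?)',
            [batchRunId, accountId, row.balance])
    }
}

// Matching process: scenario 29 is known to be tied to batch run ZZ20,
// so look up that run's snapshot and compare against the expectation.
def scenarioPassed(Sql db, String batchRunId, String accountId, BigDecimal expected) {
    def row = db.firstRow(
        'SELECT balance FROM test_results WHERE batch_run_id = ? AND account_id = ?',
        [batchRunId, accountId])
    row != null && row.balance == expected
}

recordResults(db, 'ZZ20', ['ACCOUNT_X'])
assert scenarioPassed(db, 'ZZ20', 'ACCOUNT_X', 100.001G)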

Execute groovy code in last step of current test case in modular framework without teardown script

I have a SoapUI framework which is modular. This means that I can execute test cases based upon business operations, which are organized into different suites. With this in mind, I will need data from other test cases to use in my current test case (which is in a different suite). To accomplish this, I use a Run TestCase step in my current test case, which runs the test case in suite 1 and brings the needed data into my current test case (suite 2) via project properties.

After I run the current test case, I need the project properties to be cleared, and I have the Groovy code to do that. Here's the issue: since this is modular, I need to clear the project properties ONLY after the CURRENT test case is run. Using a teardown script at the test case level isn't working, because it will always clear the project properties EVEN IF its test case is not the current one being run. Meaning: my current suite is suite 2, and all the test cases in suite 2 have a teardown script that removes the project properties. When I run a test case in suite 3 that needs data from a test case in suite 2, the properties will not be present, because of the teardown scripts on the suite 2 test cases. Again, I only need the properties cleared when the last step of the current test case has run, without affecting any other test cases during modular execution. I hope that makes sense.
As a side note, this framework allows me to test business operations by suite for ad hoc testing. It also allows me to run a full regression from beginning to end (testing all suites in a row). I need the solution to not ruin the full regression run as well.
Any ideas on how to do this?
In order to do this, I had to create a setup and a teardown script at every level: project, suite, and test case.
Within the setup script, I create a variable called Is_Running. An if statement then checks: if Is_Running is null, fill the variable with the name of the project, suite, or test case that is currently being executed. For example, if I'm executing at the project level, this code first checks whether there is anything in the Is_Running container, and if not, it writes the project name into that variable.
The teardown script at each level then says: if the Is_Running variable equals the name of whatever level I'm running, erase the project properties. This ensures that the project properties are only erased once the current level has finished executing, and not in the middle of a test (when other suites are being used).
For example: if I start my testing at the suite level and choose to run "Suite3", the setup script writes "Suite3" into the Is_Running variable. When Suite3 engages Suite2 to run the needed test cases, Suite2's setup script sees that the Is_Running variable is not null, so it does NOT write its name into the Is_Running container. As a result, Suite2's teardown script does not erase the project properties, since the name does not match. Once Suite3 has completed all its test steps, its teardown script sees that Is_Running contains "Suite3", so it deletes the project properties.
This approach allows me to run the project at any level, with the project properties deleted only after the current level has finished running. I needed to know Groovy well enough to do all the work mentioned above, but the approach is what I was looking for in this question. If you know of a less complicated way, please leave me a note!
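For illustration, here is a minimal Groovy sketch of that guard at the suite level. The Is_Running property name matches the description above, but the cleanup policy (removing every project property) is an assumption you may want to narrow; the same two scripts are repeated at the project and test case levels, substituting project.name or testCase.name for testSuite.name.

// --- Suite-level setup script ---
def project = testSuite.project
if (!project.getPropertyValue('Is_Running')) {
    // nothing has claimed ownership yet, so this suite is the entry point
    project.setPropertyValue('Is_Running', testSuite.name)
}

// --- Suite-level teardown script (a separate script in SoapUI) ---
def proj = testSuite.project
if (proj.getPropertyValue('Is_Running') == testSuite.name) {
    // only the entry-point level cleans up the shared project properties
    proj.getPropertyNames().each { name ->
        proj.removeProperty(name)
    }
}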

Cucumber: Each feature passes individually, but not together

I am writing a Rails 3.1 app, and I have a set of three Cucumber feature files. When run individually, as with:
cucumber features/quota.feature
-- or --
cucumber features/quota.feature:67 # running a single scenario by line number
...each feature file runs fine. However, when all run together, as with:
cucumber
...one of the tests fails. It's odd because only one test fails; all the other tests in the feature pass (and many of them do similar things). It doesn't seem to matter where in the feature file I place this test; it fails if it's the first test or way down there somewhere.
I don't think it can be the test itself, because it passes when run individually or even when the whole feature file is run individually. It seems like it must be some effect related to running the different feature files together. Any ideas what might be going on?
It looks like there is coupling between your scenarios. Your failing scenario assumes that the system is in a particular state. When the scenario runs individually, the system is in that state, so it passes. But when you run all the scenarios, scenarios that ran earlier change this state, and so it fails.
You should solve it by making your scenarios completely independent. The work of one scenario shouldn't influence the results of other scenarios. This is highly encouraged in The Cucumber Book and in Specification by Example.
I had a similar problem and it took me a long time to figure out the root cause.
I was using @selenium tags to test jQuery scripts on a Selenium client.
My page had an Ajax call that sent a POST request. I had a bug in the JavaScript and the POST request was failing. (The feature wasn't complete and I hadn't yet written steps to verify the result of the Ajax call.)
This error was recorded in Capybara.current_session.server.error.
When the following non-Selenium feature was executed, a Before hook within Capybara called Capybara.reset_sessions!, which in turn called:
def reset!
  driver.reset! if @touched
  @touched = false
  raise @server.error if @server and @server.error
ensure
  @server.reset_error! if @server
end
@server.error was not nil for each scenario in the following feature(s), so Cucumber reported each step as skipped.
The solution in my case was to fix the ajax call.
So Andrey Botalov and Doug Noel were right. I had carry over from an earlier feature.
I had to keep debugging until I found the exception that was being raised and investigate what was generating it.
I hope this helps someone else who didn't realise they had carry-over from an earlier feature.
