Spock Framework: Skipping Failed Test Case - groovy

I want the execution to skip the tests that are failing and move on to the next test in the class. Since I am using @Stepwise, execution stops as soon as any test fails.
I have some tests that must pass, and if one of those fails execution should stop; other tests should simply be skipped if they fail. Please advise how this can be done with the Spock/Groovy framework.

You can use the @Unroll annotation, which indicates that iterations of a data-driven feature should be made visible as separate features to the outside world (IDEs, reports, etc.). So even if one iteration fails, execution will move on to the next one (see the sketch below). Beyond that, there are several other annotations that can serve your purpose.
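For example, here is a minimal sketch of an unrolled data-driven feature (the class and the data are made up). Each row of the where: table is reported as a separate test, so one failing row does not stop the remaining rows:

import spock.lang.Specification
import spock.lang.Unroll

class MathSpec extends Specification {

    @Unroll
    def "maximum of #a and #b is #expected"() {
        expect:
        Math.max(a, b) == expected

        where:
        a | b | expected
        1 | 3 | 3
        7 | 4 | 7
        0 | 0 | 0
    }
}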

I'm not sure I understand your question, but Spock provides several annotations that allow specs or features to be skipped. These include:
@Ignore: indicates that a specification or feature method should not be run.
@IgnoreIf: ignores the annotated spec or feature if the given condition holds.
@IgnoreRest: indicates that all feature methods except the ones carrying this annotation should be ignored.
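For illustration, a minimal sketch of how they are applied (the spec, the feature names and the ignore condition are made up):

import spock.lang.Ignore
import spock.lang.IgnoreIf
import spock.lang.Specification

class CheckoutSpec extends Specification {

    @Ignore   // skipped unconditionally
    def "applies discount codes"() {
        expect:
        true
    }

    @IgnoreIf({ System.getProperty("os.name").contains("Windows") })   // skipped only on Windows
    def "uses unix file paths"() {
        expect:
        true
    }
}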

Related

Separate executions of BDD scenario outline examples in to different TestRail test cases (per example)

For reporting into TestRail on automated BDD (cucumber-jvm) runs, we are using the Jenkins TestRail plugin (https://github.com/jenkinsci/testrail-plugin), and we are getting false positives for test cases generated from scenario outlines.
The default implementation logs scenario outline example executions as multiple executions of the same test case in the same run. If the last example to run passed then the test case is passed for the run, even if all other examples actually failed.
Has anyone experienced this behaviour, and did you find a way to change it so that the test case fails if any example fails, or to list each example execution as a separate test case?
I would report this behaviour to the authors of the plugin. The behaviour you describe is clearly very wrong.

What is the difference between Test Scenario and Test Case (if we are not taking automation into consideration)?

If automation is excluded, then from a manual testing point of view, what is the difference between a Test Strategy, a Test Scenario, a Test Case, and a Test Script?
Test Strategy
A Test Strategy document is a high-level document, normally developed by the project manager. It defines the “Software Testing Approach” used to achieve the testing objectives. The Test Strategy is normally derived from the Business Requirement Specification document.
Some companies include the “Test Approach” or “Strategy” inside the Test Plan, which is fine and is usually the case for small projects. For larger projects, however, there is one Test Strategy document and a separate Test Plan for each phase or level of testing.
Components of the Test Strategy document
1)Scope and Objectives
2)Business issues
3)Roles and responsibilities
4)Communication and status reporting
5)Test deliverables
6)Industry standards to follow
7)Test automation and tools
8)Testing measurements and metrics
9)Risks and mitigation
10)Defect reporting and tracking
11)Change and configuration management
12)Training plan
Test Scenario
A scenario is a story that describes a hypothetical situation. In testing, you check how the program copes with this hypothetical situation.
The ideal scenario test is credible, motivating, easy to evaluate, and complex.
Scenarios are usually different from test cases in that test cases are single steps and scenarios cover a number of steps. Test suites and scenarios can be used in concert for complete system tests.
A scenario is any functionality that can be tested. It is also called a Test Condition or a Test Possibility.
Test Cases
In software engineering, a test case is a set of conditions or variables under which a tester will determine whether a requirement upon an application is partially or fully satisfied. It may take many test cases to determine that a requirement is fully satisfied. In order to fully test that all the requirements of an application are met, there must be at least one test case for each requirement, unless a requirement has sub-requirements; in that situation, each sub-requirement must have at least one test case.
A test case is also defined as a sequence of steps to test the correct behavior of a functionality/feature of an application.
A sequence of steps consisting of actions to be performed on the system under test (these steps are sometimes called the test procedure or test script). These actions are often associated with some set of data (preloaded or input during the test). The combination of actions taken and data provided to the system under test leads to the test condition. This condition tends to produce results that the test can compare with the expected results, i.e. assess quality under the given test condition. The actions can be performed serially, in parallel, or in some other combination.
Test Script
Test Script is a set of instructions (written using a scripting/programming language) that is performed on a system under test to verify that the system performs as expected. Test scripts are used in automated testing.
Sometimes, a set of instructions (written in a human language), used in manual testing, is also called a Test Script but a better term for that would be a Test Case.
Test Scenario means "what is to be tested" and Test Case means "how it is to be tested".
Test case: It consists of the test case name, precondition, steps/input conditions, and expected result.
Test Scenario: A test scenario consists of a detailed test procedure. We can also say that a test scenario has many test cases associated with it; before executing a test scenario, we need to think of the test cases for each scenario.
Test Script: A Test Script is a set of instructions (written using a programming language) that is performed on a system under test to verify that the system performs as expected.
Test script is the term used when referring to automated testing: when you are creating a test script, you are using an automation tool to create it.
Test strategy
outlines the testing approach and everything else that surrounds it. It is different from the test plan in the sense that a test strategy is only a subset of the test plan. It is a hard-core test document that is, to an extent, generic and static. There is also an argument about the levels at which a test strategy or plan is used, but I really do not see any discernible difference.
Example: the test plan gives the information of who is going to test what, and when. For example: Module 1 is going to be tested by tester X. If tester Y replaces X for some reason, the test plan has to be updated.
The test strategy, on the other hand, will have details like “Individual modules are to be tested by test team members.” In this case it does not matter who is testing, so it is generic, and a change of team member does not require an update, keeping it static.
Test scenario
This is a one-line pointer that testers create as an initial, transitional step into the test design phase. It is mostly a one-line definition of “what” we are going to test with respect to a certain feature. Usually, test scenarios are an input to the creation of test cases. In agile projects, test scenarios are the only test design outputs and no test cases are written from them. A test scenario might result in multiple tests.
Example test scenarios:
Validate if a new country can be added by the Admin
Validate if an existing country can be deleted by the admin
Validate if an existing country can be updated
Test Case:
Test Case is a commonly used term for a specific test. This is usually the smallest unit of testing. A test case will consist of information such as the requirement being tested, test steps, verification steps, prerequisites, outputs, test environment, etc.
A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.
Test Script:
Commonly used to refer to the instructions for a particular test that will be carried out by an automated test tool
Test Scenario: a high-level, simple, individual view of an actual system capability. We do not need to define a clear step-by-step way of validating it at this stage, as test scenarios are defined at very early stages of the software life cycle. A scenario is not considered in the test plan, as it is not yet a defined item in terms of resource allocation.
Test Case: a document which consists of system-specific prerequisites, but no step-by-step validation. In test case traceability we use the test case document against the requirements; this is how we define the test coverage matrix against requirements. In most cases, a test case will cover multiple test scenarios, and a test case carries complexity. Test cases are used to calculate the testing effort for a particular release with respect to the code version.
Test Script (without the automation/programming-language context): everyone is aware that a test script is an automation program uniquely mapped to a test case. But the term can also be used without automation, especially when you are using Rational Quality Manager (RQM) as your test repository:
1. When a test case has multiple versions and the testing team needs to maintain all test case versions against multiple system code versions. In this case, one test case will have multiple test scripts (one for each version).
2. When a test case produces different results in different environments (operating system, technology, etc.), the test case will be mapped to multiple test scripts whose expected results change while the test case itself remains the same.
In either of the above cases, while creating the test plan we first need to decide which version of the test case (in other terms, which test script) to execute, based on the code version or the environment.
Hope this helps to answer your question.

Can YUI3 Test asserts display the line number of failure?

I just started using YUI3 Test module (http://yuilibrary.com/yui/docs/test/).
I have testcases with many asserts that verify state. If one of the assert fails, the TestConsole indicates an assert failed, but doesn't indicate which of the many asserts in the test failed. It would be great to have the failure message report the line number.
The browser exception actually contains the JS failure line number, but the YUI3 Test class filters this out and throws its own exception, which doesn't seem to contain the line number. Is there an easy way to add this reporting while still taking advantage of the YUI3 Test class as a harness?
I will start with the tl;dr
YUI3 does not provide any intrinsic way to display the line number of a failed test. I suppose it would be possible to manipulate Error constructors such that you could interrogate them; however, the problem is that Error.lineNumber is only supported in certain browsers (I believe it is Mozilla only).
Even if that did work, you'd end up realizing that this is a bit convoluted. You'd have to always be sure to do:
throw new Error(...);
In your calling code, you'd always have to do:
try {...} catch(e) { /* e.lineNumber */ }
And even if this all worked and you were willing to do this, I wouldn't recommend it.
The real answer
The root of the problem is that you seem to have multiple asserts in your test methods. Developers who are trying to be pragmatic will sometimes tell you that "one assertion per test method" is unreasonable and dogmatic. It is very attractive to think that multiple assertions per test method are fine...until they aren't.
I'm sure there are times where multiple assertions are better, but I haven't seen it yet. I've been testing for years now and here is what I've found:
I've given multiple asserts per method a try, and each time I've been bitten by the problem of not knowing which assertion failed. No cargo-culting here...I've tried both, and out of the two methodologies, only one has not bitten me.
One assertion per test forces you to really think about what/how you are testing.
Reading:
Testing: One assertion per test
One Assertion Per Test

Can mocks and stubs persist between Cucumber steps?

I have an app that relies on a 3rd party API called PSC, but I want to isolate my cucumber tests from API calls to PSC.
So, I wrote a couple of cucumber steps:
When /^we pretend that PSC is up$/ do
  PscV1.default_psc_connection("test user").stub!(:default_connection_is_up?).and_return(true)
end

When /^we pretend like PSC assignments exist for all subjects$/ do
  PscV1.default_psc_connection("test user").stub!(:assignment_exists?).and_return(true)
end
...and what these stubs are supposed to do is make the Cucumber scenario think that the API calls are working. However, the stubs don't seem to persist between steps, so further steps in my scenario don't get the stubbed return values, they try to make an actual API call, and therefore they fail.
Is there a way to get stubs to persist at least as long as an entire scenario? I've used stubs successfully in other Cucumber tests, so I know they work in general, but this is the first time I've written a Cucumber step whose entire purpose is to provide a stub.
As far as I can tell, the answer to whether or not they persist is, quite simply, "no".
I wound up writing a combined step that did the following:
When /^I follow "([^\"]*)" while pretending that PSC is up and assignments exists for all users$/ do |link_text|
  PscV1.stub!(:default_connection_is_up?).and_return(true)
  PscV1.default_psc_connection("test user").stub!(:assignment_exists?).and_return(true)
  click_link link_text
end
...which works. It doesn't allow me to reuse the stubs, as their own steps, unfortunately, but it works.
UPDATE: You might be able to get around this limitation by assigning the stub to a class-level variable, which is accessible from other steps within the same scenario.

Can TDD be a valid alternative to overkill data validation?

Consider these two data validation scenarios:
Check everything everywhere
Make sure that every method that takes one or more arguments actually checks them to ensure that they're syntactically valid.
Pros
Very fine check granularity.
If the code that is being written is for some kind of library we make sure to limit the damage that can be done if the developers that will be using it fail to provide valid data.
Cons
It's costly to always perform checks that most of the time shouldn't be needed.
It's still possible to forget to add a check every now and then.
More code is being written and hence in need of maintenance.
Make use of TDD goodness
Validate data only when it enters your code from the external world.
To make sure that data will always be syntactically correct internally, create tests that check every method that returns a value, so that if valid data enters, valid data exits.
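To illustrate what I mean, here is a rough sketch in Groovy/Spock (the names are invented; it is only meant to show the idea). The internal method does no defensive checking, and a test pins down the "valid data in, valid data out" guarantee instead:

import spock.lang.Specification

// Internal code: assumes the caller has already validated the input,
// so there is no argument checking here.
class PriceCalculator {
    BigDecimal netToGross(BigDecimal net, BigDecimal vatRate) {
        net + net * vatRate
    }
}

// The test documents the guarantee that valid input yields valid output.
class PriceCalculatorSpec extends Specification {
    def "valid input produces the expected gross price"() {
        expect:
        new PriceCalculator().netToGross(100.00G, 0.20G) == 120.00G
    }
}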
The pros and the cons are practically switched with the ones from the former approach.
As of now I'm using the first approach, but since I'm employing test driven development I thought that maybe I could go with the second one.
The advantages are clear, still, I wonder if it's as secure as the first method.
It sounds like the first method is contract driven, and one aspect of that is that you also need to verify that what you return from any public interface meets the contract.
But, I think that both approaches are valid, but very different.
TDD only partially deals with the public interface, in that it should check that every input is properly validated. Unfortunately, unless you have all your validation in separate functions, it becomes very difficult to ensure that a function of 3 or 4 parameters is adequately tested for validity. The number of tests you have to write is quite high in either approach.
If you are writing a library, then every function that can be called directly from the outside (outside meaning outside the library) will need to check that every input is valid, and that invalid input is handled as per the contract, either by returning null or by throwing an exception. But it must be in agreement with the documentation.
Once you have verified that, there is no reason to force the verification onto private functions, as those can only be called from within the library, where you should already be dealing only with valid data.
Lots of tests will be needed regardless, unfortunately. All these tests do is ensure that you don't get any surprise problems, but that should generally justify the cost of writing and maintaining them.
As to your question, if your tests are really well written, and you ensure that all validity checks are done completely, then it should be as secure, but the risk is that if you believe it is secure and you have poorly written tests then it will actually be worse than no tests, as there is an assumption that your tests are well-written.
I would use both methods until you know your tests are well written; then just go with TDD.
My opinion is that in the first scenario, two of your Cons outweigh everything else:
It's costly to always perform checks that most of the time shouldn't be needed.
More code is being written and hence in need of maintenance.
Also, technically TDD has no bearing on this question, because it is not a testing technique. More later...
To mitigate the Cons I would strongly advocate (as I think you say) splitting the code into an outside and an inside: The outside is where all the validation occurs. Hopefully this is but a thin wrapper around the inside, to prevent GIGO. Once inside, data never needs to be validated again.
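As a rough sketch of that split (Groovy, with invented names), the outside is the only place that validates, and the inside assumes it only ever sees valid data:

// Outside: a thin wrapper that performs all validation, preventing GIGO.
class OrderApi {
    private final OrderService service = new OrderService()

    void placeOrder(String customerId, int quantity) {
        if (!customerId?.trim()) {
            throw new IllegalArgumentException("customerId is required")
        }
        if (quantity <= 0) {
            throw new IllegalArgumentException("quantity must be positive")
        }
        service.place(customerId, quantity)
    }
}

// Inside: never validates again; it only receives data the wrapper let through.
class OrderService {
    void place(String customerId, int quantity) {
        // business logic only, no defensive checks
    }
}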
As for TDD, I would strongly advocate (as you are now doing) employing it to develop your code, with the added benefit of leaving a trail of tests that become a regression test suite. Now you will naturally develop your outside code to perform robust validation, with the promise of easily adding any checks that you might initially forget. Your inside code can be developed assuming it will only handle valid data, but TDD will still give you the confidence that it will function to spec.
I'm saying that I would go with the second approach, as I've described, independently of whether I'm developing with TDD, or not (but TDD is always my first choice).
The advantages are clear, still, I wonder if it's as secure as the first method.
This completely depends on how well you test it.
This could be just as secure, if the following two criteria are met:
Every publicly exposed means of adding data to the system is validated completely
Every internal method that translates data is completely and adequately tested
However, I question that this would be easier or that it would require less code. The amount of code required to check every public entry point is going to be very similar to the amount of code required to validate each method. You're going to need more checks in the entry points, since they'll have to check things that might otherwise be checked internally.
For the second method, you need two good sets of tests. You must not only check that
if valid data enters, valid data exits.
You must also check that if invalid data enters, an exception is thrown. I suppose you still have to validate data and bail out if you have invalid data; this is really the only way if you don't want pesky ArgumentNullExceptions or other cryptic errors in your production application. However, TDD can really toughen up the quality of all that checking (especially with fuzz testing).
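As a rough sketch of that second set of tests (Groovy/Spock, with invented names), a data-driven feature can assert that each piece of invalid input is rejected at the boundary:

import spock.lang.Specification
import spock.lang.Unroll

// Invented boundary class that is expected to reject bad input.
class Registration {
    void register(String email) {
        if (!email?.contains("@")) {
            throw new IllegalArgumentException("invalid email")
        }
    }
}

class RegistrationSpec extends Specification {

    @Unroll
    def "rejects invalid email #email"() {
        when:
        new Registration().register(email)

        then:
        thrown(IllegalArgumentException)

        where:
        email << [null, "", "not-an-email"]
    }
}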
One item is missing from your list of pros and cons, and it is important enough to make unit testing a much safer method than maniacal parameter checking.
You just have to consider the When and the Where.
For unit testing the when and the where are:
when: at design time
where: in a dedicated source file outside of the application code
For overkill data checking they are:
when: at runtime
where: entangled in the application source code, typically using asserts.
That is the point: code covered by unit testing detects errors at design time, when you run the tests. If you are the paranoid and schizophrenic kind of tester (the best kind), you write tests designed to break whatever can be broken, checking each data boundary and perverse input. You also use code coverage tools to ensure every branch of every alternative is tested. You have no limit: tests live in their own files and do not clutter the application. It doesn't matter if you end up with ten times as many test lines as actual application code; there is no runtime penalty and no readability penalty.
On the other hand, integrated overkill checking detects errors at runtime. In the worst case it will detect errors on the user's system, where you can do nothing about them (if you even hear of the error happening). Also, even if you are the paranoid kind, you will have to limit your checking: assertions just can't be 90 percent of the application code. They raise readability and maintenance issues, and often a heavy performance penalty. Where will you stop, then: only checking parameters for external input? Checking every possible or impossible input of inner functions? Checking every loop invariant? Also testing behaviour when out-of-flow data (globals, system files, etc.) changes? You must also be conscious that assertion code can itself contain bugs. What if the formula in an assertion performs a division? You must ensure it will not lead to a divide-by-zero error or the like.
Another problem is that in many cases you just don't know what can be done when an assertion fails. If you are at a real entry point you can return something understandable to your user or to the library user... when you are checking inner functions
