I just started using YUI3 Test module (http://yuilibrary.com/yui/docs/test/).
I have test cases with many asserts that verify state. If one of the asserts fails, the TestConsole indicates that an assert failed, but doesn't indicate which of the many asserts in the test it was. It would be great to have the failure message report the line number.
The browser exception actually contains the JS failure line number, but the YUI3 Test class filters this out and throws its own exception, which doesn't seem to contain the line number. Is there an easy way to add this reporting while still taking advantage of the YUI3 Test class as a harness?
I will start with the tl;dr
YUI3 does not provide any intrinsic way to display the line number of a failed test. I suppose it would be possible to manipulate Error constructors such that you could interrogate them; however, the problem is that Error.lineNumber is only supported in certain browsers (I believe it is Mozilla only).
Even if that did work, you'd end up realizing that this is a bit convoluted. You'd have to always be sure to do:
throw new Error(...);
In your calling code, you'd always have to do:
try {...} catch(e) { /* e.lineNumber */ }
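Put together, a minimal sketch of the whole pattern might look like this (Firefox only: Error.lineNumber is non-standard and reports the line on which the Error object was constructed):

var actual = 1, expected = 2;
try {
    if (actual !== expected) {
        throw new Error('values differ'); // lineNumber points at this line (Firefox only)
    }
} catch (e) {
    // e.lineNumber is undefined in most other browsers
    console.log(e.message + (e.lineNumber ? ' at line ' + e.lineNumber : ''));
}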
And even if this all worked and you were willing to do this, I wouldn't recommend it.
The real answer
The root of the problem is that you have multiple asserts in your test methods. Developers who are trying to be pragmatic will sometimes tell you that "one assertion per test method" is unreasonable and dogmatic. It is very attractive to think that multiple assertions per test method are fine...until they aren't.
I'm sure there are times where multiple assertions are better, but I haven't seen it yet. I've been testing for years now and here is what I've found:
I've given multiple asserts per method a try, and each time I've been bitten by the problem of not knowing which assertion failed. No cargo-culting here...I've tried both, and out of the two methodologies, only one has not bitten me.
One assertion per test forces you to really think about what/how you are testing.
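For example, instead of one test method with several asserts, the question's YUI3 setup could give each verification its own test method. A rough sketch (the user object and its expected values are made up for illustration):

YUI().use('test', 'test-console', function (Y) {
    var user = { name: 'Ada', age: 36 }; // stand-in for the real object under test

    Y.Test.Runner.add(new Y.Test.Case({
        name: 'user state',

        // One assert per test: a failure now names the exact test method.
        testNameIsSet: function () {
            Y.Assert.areEqual('Ada', user.name);
        },
        testAgeIsSet: function () {
            Y.Assert.areEqual(36, user.age);
        }
    }));

    new Y.Test.Console().render();
    Y.Test.Runner.run();
});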
Reading:
Testing: One assertion per test
One Assertion Per Test
Related
In the Behavior Driven Development style of writing automated functional tests, it is generally understood that the Givens are the preconditions the system must be in before the test begins, the When is the user action performed, and the Then asserts whether the observed outcome matches the expected one, passing or failing the test accordingly.
Of late, my team has started performing assertions in Givens and Whens too, which led me to wonder whether this is correct practice.
For example:
Given a user with xyz privileges is logged in
When I click on the abc tab
Then records should be displayed
Should the Given in this test actually assert that the logged-in user indeed has xyz privileges, or assume the user has the required privileges and just perform the login?
Also, should the When assert that the tab is visible before clicking?
If "logging in" is a behaviour that's interesting* and you need examples to illustrate it** then it should be a "When", with the context in which it happens being the "Given", and the outcome that results being the "Then".
This is the case for any behaviour you need to illustrate with an example.
However, sometimes it can be useful to make assertions in a Given, just to check that the context really is set up properly. Sometimes when people start adopting BDD the environments can be a bit flaky, and it's handy to know if it's your scenario finding a bug that made it fail, or something earlier in the process. So for that reason, you might find assertions there.
The Given doesn't concern itself with how the system got into that state, though. If it has assertions, they should merely check that it is in that state.
Another form I've seen is a check that the system is in the correct state for the context, with corrective action if it isn't.
Note that these are largely interim patterns. They can be helpful while teams are adopting BDD and getting their pipelines and automated deployments into shape.
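A rough cucumber-js style sketch of those two interim forms (ensureUser, isLoggedInAs, and logInAs are hypothetical helpers):

var assert = require('assert');

module.exports = function () {
    this.Given(/^a user with xyz privileges is logged in$/, function () {
        var user = ensureUser({ privileges: 'xyz' }); // hypothetical helper

        // Corrective action: fix the state if it is wrong.
        if (!isLoggedInAs(user)) { // hypothetical helper
            logInAs(user);         // hypothetical helper
        }

        // Context check: fail fast here if the environment is flaky,
        // rather than letting the Then report a misleading failure.
        assert(isLoggedInAs(user), 'expected the user to be logged in');
    });
};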
Assertions which check the results of the "When" are part of the outcome, so part of the "Then". I can't imagine a case where you'd need to check the results of a "When" without it being an outcome. If you've got one, please give me an example.
We discourage using clicking and UI details in scenarios. Work out what you're trying to achieve, and do that. Hide the clicking under the covers.
Most of the time automated scenarios aren't actually there to catch bugs; they're living documentation that helps people think about what they're trying to achieve and what the system already does, thus encouraging good design and preventing bugs in the first place.
I'd say something like "navigate to the ABC tab" and just do it; you'll get a relevant exception if the tab isn't there, and that will happen far less often than someone reading the scenario will.
* It's logging in. It probably isn't.
** It's logging in. You probably don't.
Assertions in Givens and Whens are generally an indicator of immaturity of step definitions. So I might put one in a step that I am working on, but I wouldn't keep it there for very long.
I'd implement your step
Given a user with xyz privileges is logged in
as something like
Given('a user with xyz privileges is logged in') do
  user = create_user(privileges: :xyz) # helper that builds a user with the given privileges
  login_as(user: user)                 # helper that performs the login
end
I would expect that the create_user method would be trusted pretty quickly and would not need an assertion. The same goes for the login_as method. (If methods like these aren't working properly, you'd expect many scenarios to be broken.)
Notice how this code makes it clear that there are two things going on and provides/uses an API that you would expect many other step definitions to use. Notice also how any assertion you might want to keep really belongs in the helper methods, not in the step definition.
There are no strict rules; however, in my experience it is handy to establish that there are no assertions there, so it is clear what is actually being tested, and the test itself will run faster.
If your abc tab is not displayed, it shouldn't be clickable, and the test will then fail either when identifying the object to click or when performing the next step (depending on the tools and method you use).
However, you should make sure that the actual implementation of the test is not cheating by working with internals that are able to trigger a click even when the actual component isn't displayed.
There is another point about the Given: it is normally where you are advised to set up the environment for the test. This means you make sure that your system has a user and that the user has been logged in. There is no point in verifying that which you just set up; however, if any failure occurs, you should fail fast so you know what is wrong.
Generally I would be careful about using assertions in the Given or When steps in the automation code.
However, I use login steps like this all over the place in my code, as - when used correctly - they can turn the step into a piece of context rather than an assertion itself.
For instance:
Is this user logged in?
Yes => Do Nothing.
No => Log out, then Log in as this user
In the example you gave, we know that we need the specific user to be logged in for this scenario to work, however we do not know if there is a different user logged in (other scenarios may have been run before it). If you use this step as a check to make sure that the correct user is logged in (as in my example), it is part of the context and it will speed up run time of the automation as you won't have to log in before and log out after every scenario.
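A sketch of that conditional login as a cucumber-js step (currentUser, logOut, and logInAs are hypothetical helpers):

module.exports = function () {
    this.Given(/^I am logged in as '([^']*)'$/, function (username) {
        // Context, not assertion: only act if the state is wrong.
        if (currentUser() !== username) { // hypothetical helper
            logOut();                     // hypothetical helper
            logInAs(username);            // hypothetical helper
        }
        // If the right user is already logged in, nothing happens,
        // which keeps the suite fast and lets scenarios share a session.
    });
};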
I want the execution to skip the tests that are failing and move on to the next test in the class. Since I am using @Stepwise, the execution stops once any test fails.
I have some tests which must pass, and if one of those fails, execution should stop; the other tests should simply be skipped if they fail. Please advise how this can be done with the Spock/Groovy framework.
You can use the
@Unroll
annotation, which indicates that iterations of a data-driven feature should be made visible as separate features to the outside world (IDEs, reports, etc.). So even if one of your cases fails, execution will move forward to the next case. There are also several other annotations that can serve your purpose.
I'm not sure I understand your question, but Spock provides several annotations that allow failing tests to be ignored; these include:
@Ignore: indicates that a specification or feature method should not be run.
@IgnoreIf: ignores the annotated spec or feature if the given condition holds.
@IgnoreRest: indicates that all feature methods except the ones carrying this annotation should be ignored.
I am using cucumber with protractor.
Is it possible in Cucumber to have more than one annotation for the same method?
For example, something like this:
this.Given(/^I log in as user '([^']*)' with password '([^']*)'$/, function (username, password) {
});
this.When(/^I log in as user '([^']*)' with password '([^']*)'$/, function (username, password) {
});
From Cucumber's perspective, there is no difference between Given, When, and Then. The different keywords are there just to enhance the readability of the .feature file. When you implement the steps, you can choose to use any of them.
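For instance, since the keyword carries no meaning for Cucumber, one function can back both step definitions. A sketch (logIn is a hypothetical helper):

module.exports = function () {
    var logIn = function (username, password, callback) {
        // ... drive the login through your page objects ...
        callback();
    };

    // The same function registered under both keywords.
    this.Given(/^I log in as user '([^']*)' with password '([^']*)'$/, logIn);
    this.When(/^I log in as user '([^']*)' with password '([^']*)'$/, logIn);
};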
Personally, I would never consider two different annotations for the same method. One is sufficient. The place where it matters is in the scenario and there I would use whatever I need.
At the same time, I am a bit interested in why you describe your system using one Given and one Then step that are actually the same thing. The Given is where you prepare the system under test; the Then is where you assert that the expected outcome has occurred. It feels surprising to me that they are actually the same execution in your case. Maybe there is a reason, but it seems strange to me at the moment.
I have a feature file A.feature which generates a number in the response body. Now I have to capture that text/number and then pass it as test data to another feature file. Do we need to write a step definition, or is there any other way? Please suggest.
Generally, you should not do that. In fact, you should try to make your test cases totally independent from each other. It's a bad sign if a single code change breaks many of your tests (in your case, whenever the first feature breaks, the second one would as well). It's also a bad sign if, as a starting point, your second feature needs a special response that is not easy to construct.
Consider these two data validation scenarios:
Check everything everywhere
Make sure that every method that takes one or more arguments actually checks them to ensure that they're syntactically valid.
Pros
Very fine check granularity.
If the code being written is for some kind of library, we make sure to limit the damage that can be done if the developers who use it fail to provide valid data.
Cons
It's costly to always perform checks that most of the time shouldn't be needed.
It's still possible to forget to add a check every now and then.
More code is being written and hence in need of maintenance.
Make use of TDD goodness
Validate data only when it enters your code from the external world.
To make sure that internally data will always be syntactically correct, create tests that check every method that returns a value: to make sure that if valid data enters, valid data exits.
The pros and the cons are practically switched with the ones from the former approach.
As of now I'm using the first approach, but since I'm employing test driven development I thought that maybe I could go with the second one.
The advantages are clear, still, I wonder if it's as secure as the first method.
It sounds like the first method is contract driven, and one aspect of that is that you also need to verify that what you return from any public interface meets the contract.
But I think that both approaches are valid, though very different.
TDD only partially deals with the public interface, in that it should check that every input is properly validated. Unfortunately, unless you have all your validation in separate functions, it becomes very difficult to adequately test that a function of 3 or 4 parameters checks each of them for validity. The number of tests you have to write is quite high in either approach.
If you are writing a library, then every function that can be called directly from the outside (outside being outside the library) needs to check that every input is valid, and that invalid input is handled as per the contract, either by returning a null or by throwing an exception. And it must be in agreement with the documentation.
Once you have verified that, there is no reason to force the verification on private functions, as those can only be called from within the library, which you have just verified deals only with valid data.
Lots of tests will be needed regardless, unfortunately. All these tests do is ensure that you don't have any surprise problems, but that should generally help justify the cost of writing and maintaining them.
As to your question: if your tests are really well written and you ensure that all validity checks are done completely, then it should be just as secure. The risk is that if you believe it is secure but your tests are poorly written, it will actually be worse than having no tests at all, because you are relying on the assumption that your tests are well written.
I would use both methods until you know your tests are well written; then just go with TDD.
My opinion is that in the first scenario, two of your Cons outweigh everything else:
It's costly to always perform checks that most of the time shouldn't be needed.
More code is being written and hence in need of maintenance.
Also, technically TDD has no bearing on this question, because it is not a testing technique. More later...
To mitigate the Cons, I would strongly advocate (as I think you say) splitting the code into an outside and an inside: the outside is where all the validation occurs. Hopefully this is but a thin wrapper around the inside, to prevent GIGO (garbage in, garbage out). Once inside, data never needs to be validated again.
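A minimal sketch of that outside/inside split (the names are made up; the point is that validation lives only in the thin outer wrapper):

// Outside: the only place where raw input is validated.
function parseAge(input) {
    var age = Number(input);
    if (!Number.isInteger(age) || age < 0 || age > 150) {
        throw new RangeError('age must be an integer between 0 and 150, got: ' + input);
    }
    return age;
}

// Inside: assumes valid data and never re-checks it; its correctness
// is covered by tests rather than by runtime checks.
function yearsToRetirement(age) {
    return Math.max(0, 67 - age);
}

// All external input funnels through the wrapper.
console.log(yearsToRetirement(parseAge('42'))); // 25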
As for TDD, I would strongly advocate (as you are now doing) employing it to develop your code, with the added benefit of leaving a trail of tests that become a regression test suite. Now you will naturally develop your outside code to perform robust validation, with the promise of easily adding any checks that you might initially forget. Your inside code can be developed assuming it will only handle valid data, but TDD will still give you the confidence that it will function to spec.
I'm saying that I would go with the second approach, as I've described, independently of whether I'm developing with TDD, or not (but TDD is always my first choice).
The advantages are clear, still, I wonder if it's as secure as the first method.
This completely depends on how well you test it.
This could be just as secure, if the following two criteria are met:
Every publicly exposed means of adding data to the system is validated completely
Every internal method that translates data is completely and adequately tested
However, I question that this would be easier or that it would require less code. The amount of code required to check every public entry point is going to be very similar to the amount of code required to validate each method. You're going to need more checks in the entry points, since they'll have to check things that might otherwise be checked internally.
For the second method, you need two good sets of tests. You must not only check that
To make sure that if valid data enters, valid data exits.
You must also check that if invalid data enters, an exception is thrown. I suppose you still have to validate data and bail out when you have invalid data; this is really the only way if you don't want pesky ArgumentNullExceptions or other cryptic errors in your production application. However, TDD can really toughen up the quality of all that checking (especially combined with fuzz testing).
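Both kinds of test might look like this (plain Node.js assert; normalizeName is a made-up example function):

var assert = require('assert');

function normalizeName(name) {
    if (typeof name !== 'string') {
        throw new TypeError('name must be a string');
    }
    return name.trim();
}

// Valid data in => valid data out.
assert.strictEqual(normalizeName('  Ada '), 'Ada');

// Invalid data in => a clear exception, not a cryptic failure later.
assert.throws(function () { normalizeName(null); }, TypeError);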
One item is missing from your list of Pros and Cons, and it is important enough to make unit testing a much safer method than maniacal parameter checking.
You just have to consider the When and the Where.
For unit testing the when and the where are:
when: at design time
where: in a dedicated source file outside of the application code
For overkill data checking they are:
when: at runtime
where: entangled in the application source code, typically using asserts.
That is the point: code covered by unit testing detects errors at design time, when you run the tests. If you are the paranoid and obsessive kind of tester (the best kind), you write tests designed to break whatever can be broken, checking each data boundary and perverse input. You also use code coverage tools to ensure every branch of every alternative is tested. You have no limit: tests live in their own files and do not clutter the application. It doesn't matter if you end up with ten times as many test lines as actual application code; there is no runtime penalty and no readability penalty.
On the other hand, integrated overkill testing detects errors at runtime. In the worst case it will detect errors on the user's system, where you can do nothing about them (if you even hear of the error happening at all). Also, even if you are the paranoid kind, you will have to limit your testing: assertions just can't be 90 percent of the application code. They raise readability and maintenance issues, and often a heavy performance penalty. Where will you stop, then? Only checking parameters for external input? Checking every possible or impossible input of inner functions? Checking every loop invariant? Also testing behavior when out-of-flow data (globals, system files, etc.) is changed? You must also be conscious that assertion code can itself contain bugs. What if the formula of an assertion performs a division? You must ensure it will not itself fail with a divide-by-zero error or the like.
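For example, here is a made-up assertion whose own formula is buggy. (In JavaScript a division by zero yields Infinity or NaN instead of an error, so the broken check fails silently rather than crashing, which is arguably worse.)

// Intended invariant: the ratio must be strictly positive.
function assertRatioPositive(numerator, denominator) {
    if (numerator / denominator <= 0) {
        throw new Error('ratio must be positive');
    }
}

assertRatioPositive(3, 4); // ok: 0.75 > 0
assertRatioPositive(0, 0); // 0/0 is NaN, and NaN <= 0 is false: silently passes!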
Another problem is that in many cases you just don't know what can be done when an assertion fails. If you are at a real entry point, you can return something understandable to your user or to the library user... but when you are checking inner functions, there is often nothing sensible left to do.