What is test case enumeration, and how do I write it? - manual-testing

Recently, while looking for a job change in manual QA, I interviewed on the usual testing-concept questions. But in a few companies they gave me a scenario and asked me to write test case enumerations for it. Is that the same as writing test steps? As far as I know, enumeration means a complete, ordered list of all the items in a collection, so does it mean writing out all the test steps with descriptions?

Test case enumeration is listing all possible test case names that can be extracted from the scenario provided and classifying them by priority and by type (positive/negative).
Here is an example for better understanding; comment if you need anything more.
Enumerate test cases for Login (classifying priorities as P1 > P2 > P3):
Positive cases include:
P1-Verify the login dialog box
P1-Verify the login id
P1-Verify the password
P1-Verify the submit button
Negative cases include:
P3-Verify logging in with empty id and password fields
Note: this does not cover all the test cases.

Test enumeration orders the scripts in the test suite one by one: 1, 2, 3, and so on. It is essentially defining the priority with which you want to run a specific script in the test suite.

For me, enumeration means giving each test case an identifier which is not just 1, 2, 3, etc. but which tells you something. For example, in a very simple project with three modules (Users, Orders, Reports), you can enumerate your test cases as User.Accounts.1, User.Accounts.2, ..., User.Roles.1, User.Roles.2, Orders.Add.1, Orders.Edit.1, Orders.Edit.2, etc.
These identifiers are long, but you can shorten them or even replace the names with numbers.
Another way (which is even clearer) is to give names to the test cases:
User.Accounts.Add account
User.Accounts.Edit account
User.Accounts.Remove account
User.Accounts.Remove account - negative (cannot remove)
User.Roles.Add role
etc...
This helps you (and others) see whether the list of test cases you planned is complete or whether you should add new ones.

Gherkin: Is it correct to repeat steps?

I am reading a lot about Gherkin, and I have already read that it is not good to repeat steps and that the "Background" keyword should be used for this. But in the example on this page the same "Given" is repeated again and again. Could it be that I am doing something wrong? I would like your opinion on this:
Like with several things, this is a topic that will generate different opinions. On this particular example I would have moved the "Given that I select the post" step to the Background section, as it seems to be a prerequisite to all scenarios in this feature. Of course this would leave the scenarios in the feature without an actual Given section, but those would be incorporated from the Background section on execution.
I have also seen cases where the decision to move steps to the Background is a trade-off between having more or fewer feature files and how these are structured. For example, if there are 10 scenarios for a particular feature with a lot of similar steps between them, but 1 or 2 scenarios do not require a particular step, then those 1 or 2 scenarios would have to be moved into a new feature file in order to have the exact same steps in the Background section of the original feature.
Of course it is correct to keep the scenarios like this. From a tester's perspective, the scenarios/test cases should run independently, so you can keep these tests separate for each piece of functionality.
But if you are doing integration testing, then some of these test cases can be merged, so you can cover multiple test cases in one scenario.
And since the "Given" statement repeats, you can put it in the Background so you don't have to repeat it in each scenario; see the sketch below.
Note: these separate scenarios come in handy when you run scripts selectively with annotation tags, for example when you just have to check a specific functionality or a bug fix.
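As a rough illustration of the above (the feature name, post title, comment text and tag are hypothetical, not taken from the linked page), moving the repeated "Given" into a Background could look like this:
Feature: Commenting on a post
Background:
Given that I select the post "My first post"
@comments
Scenario: Add a comment
When I add the comment "Nice article"
Then the post should show 1 comment
@comments
Scenario: Remove a comment
Given I have added the comment "Nice article"
When I remove the comment "Nice article"
Then the post should show 0 comments
Every scenario still starts from the post being selected, but the step is written only once, and the @comments tag lets you run just these scenarios when checking that functionality.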

BDD passing Recaptcha and null value - Best Practices

I have two questions about BDD with Cucumber related to best practices.
I have a user registration page to automate.
You enter your personal data, such as name, email, and confirmation.
After that you select the site options you are interested in (there are 10 comboboxes; you can select as many as you want).
Then you complete a recaptcha and submit.
I need to validate all cases of success, as well as failure.
So, here are my questions:
1) Page with recaptcha.
Since it is not possible to automate a recaptcha, and this step naturally falls within my test, should I write a scenario that validates an invalid recaptcha?
2) Is there any clever way for me to write a scenario exploring all the possible combinations of site interest options?
In my page:
( ) Economy
( ) Education
( ) Sports
( ) Recreation
( ) Travels
( ) ...
I want to be able to submit the test several times by testing 1 option selected, 2 options, 3 options, ..., all options.
But I only want to do this if there is a lean way to do it.
In other words: can I pass a null value in the Scenario Outline Examples in this case?
In line with what Thomas mentioned about the Captcha, I would say this is one of the few things that cannot be tested through automation (except for the negative path).
I also agree with Thomas that you should not want to test every single combination of the options using executable specifications, but rather use integration testing, or possibly even unit testing if the architecture of this part of the code allows it.
As for an actual executable scenario in Gherkin format I see something like the following for this functionality:
Given Paul supplied the incorrect Captcha
When he wants to register himself
Then he should not be registered
You could question whether we should use the implementation word Captcha in the scenario, since it would become incorrect if we substituted the Captcha in our implementation with something else.
There could potentially be another scenario, depending on whether or not someone is allowed to register when no options are selected:
Given Paul has not chosen any of the possible interest topics
When he wants to register himself
Then he should not be registered
Notice the reuse of the sentences for the When and Then parts, which allows for less test code.
Regarding captcha, I would probably verify that a broken captcha stops the user. Verifying the positive path is obviously hard since the captcha is there to stop bots and an automated verification is the same as a bot.
Regarding verifying all your options, I would see if I could do that below the surface. Doing this from the UI using a browser is slow, and you are talking about 2^10 combinations. That's a lot of cases. If all combinations need to be tested, test them against the controller instead. This is a case where a tool like Cucumber may not be your best option. A programming language may be better than Gherkin.
If you still want to use Cucumber, at least make it fast and avoid the browser. I wrote a blog post about the right tool for the job. It might help you understand why you don't have to go through the UI for all scenarios.
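On the null-value part of the question: a Scenario Outline cannot pass a true null, but an empty cell in the Examples table reaches the step definition as an empty string, which the step code can interpret as "no options selected". A minimal sketch, with assumed step wording and only a few of the possible combinations (the last row matches the "no interests chosen" scenario above):
Scenario Outline: Register with different interest selections
Given Paul has filled in valid personal data
And he has selected the interests "<interests>"
When he wants to register himself
Then he <outcome>
Examples:
| interests          | outcome                   |
| Economy            | should be registered       |
| Economy, Education | should be registered       |
|                    | should not be registered   |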

How to implement If condition in Cucumber Scenarios

I have a specific scenario as follows:
if (element shows up on UI)
validate it
else
no harm done; move on...
If I knew upfront whether the element shows up or not, I could frame two different scenarios: one where the element shows up and one where it does not.
But in this case it may or may not be present. If it is present, it should function as expected.
Any suggestions on how this can be implemented in one or more Cucumber scenarios?
I am using Cucumber-jvm.
You have two separate scenarios; you just need to make sure you set up the preconditions so that one scenario or the other applies.
In general, you should not be implementing a conditional inside a single scenario, because your intent is to test two scenarios.
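As a sketch of that idea, with a hypothetical promotional banner standing in for the optional element, the Given in each scenario controls whether the element appears, so no conditional is needed inside a scenario:
Scenario: Element is shown and works
Given a promotion is active for the current user
When I open the home page
Then the promo banner should be displayed
And clicking the promo banner should open the promotion details
Scenario: Element is absent
Given no promotion is active for the current user
When I open the home page
Then the promo banner should not be displayed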

Does it ever make sense to have multiple assignees for an issue in an issue tracker?

I've been a JIRA and Bugzilla admin in past jobs, and have quite often had users ask for the ability to have more than one assignee per issue.
I know this is possible in JIRA, but to my mind it never makes sense; an issue should represent a piece of work, and only one person can do a piece of work (at least in software; I've never used an issue tracker for a 2-man bobsled team ;-)). A large piece of work will obviously involve more than one person, but I think in that case it should be split into subtasks to allow for accurate status reporting.
Does anyone have any use cases where it's valid to have multiple assignees?
The Assignee field means many things to many people. A better name might be "Responsible User". There are three cases I discuss with my clients:
A. number of assignees = 0
JIRA has an Allow Unassigned issues option but I discourage use of that because if a work item isn't owned by anyone it tends to be ignored by everyone.
B. number of assignees = 1
The default case
C. number of assignees > 1
Who is responsible for the work item represented by the issue? The best case I've seen for this is when an issue can be handled by any one person in a team, so before triage the issue is assigned to everyone in that team. I think a better approach is to create a JIRA user with an email address that goes to the whole team and assign the issue to that user. Then a member of the team can take it and have the issue assigned to them in particular.
With a single assignee, changing the assignee is recorded in the History tab, so nothing is lost in that case.
I'll often have a story / feature that can be split across multiple developers. They will have individually assigned subtasks but it would make sense to assign the parent to all involved, unless there's a lead developer. I wasn't actually aware that I could do multiple assignments, so thanks for the tip!
The other case I can think of is pair programming.
I hit upon this question while looking for solutions to doing this. Since I want to do this, I'm guessing my use case counts as an answer to your question: I only really want one assignee in the sense of someone currently working on a problem, but I want to track the whole lifecycle of an issue. For us, that can mean:
A support person receives a report from a customer, creates an issue
An issue-wrangler reviews the issue to make sure it's valid, not duplicated, has all appropriate details, etc.
A developer implements/fixes the issue
A tester performs whatever tests are appropriate (in our case, mostly extending our automated testsuite to additionally test the feature/fix)
An operations person rolls out the new version to a test environment
A support person informs the customer, who does his own tests with the new version in the test environment
An operations person rolls out the new version to production
Not all issues necessarily go through all steps. Some issues have more steps (e.g. a code review between step 3 and 4). Many issues will also move backwards among the steps (developer needs more information, we go from step 3 to 1 or 2; tester spots a problem, we go from 4 to 3).
At each stage, only one person is actually responsible for whatever's got to be done. Nevertheless, there are a whole bunch of people who are associated with the issue. Tracking systems we've used are happy to offer easy changes to previous owners of the issue (shown as a list), but I'd ideally like to go a step further, with the owner automatically reverting to the correct prior owner depending on the issue's status. At step 6, the original support person from step 1 should ideally contact the customer. At step 7, the ops person from step 5 would ideally be the assignee.
In other words, while I don't want multiple assignees for a given step, I do want there to be a "support assignee", a "developer assignee", a "testing assignee", etc.
We can do this with subtasks and we can do it by manually selecting previous owners when changing statuses, but neither is ideal and I think the situation above is one where multiple assignees would make sense.
In my company, we have a workflow similar to Nikhil's. We work in a scrum model, with developers, testers and a technical writer on each team.
The workflow of a development task is
Development -> Developer review -> QA testing -> PO Acceptance -> Done
The workflow of a QA task is
QA writes test case / automated test -> QA review -> Done
We had a tool which JIRA replaced that allowed us to assign multiple people to a task, which we found very useful for our workflow. On a QA task, I could easily see if the other tester on my team had already done work and I needed to do the next step.
Without this, I am finding it difficult to quickly identify tasks written by the other tester on my scrum team which are ready for me to review (versus the ones I wrote which they need to review).
So many people have asked for the ability to have multiple assignees since at least 2007. They have varying, valid use cases. I was disappointed that the JIRA development team unilaterally said they won't implement this and would ask them to reconsider.
https://jira.atlassian.com/browse/JRA-12841
When working in pairs or groups (pair programming, etc.), it would be nice to assign both people to the issue.
Tasks move through different steps during development (for example: development, review, testing). Different people can be responsible for each step. Even while the task is in review or testing, the reviewer will have things for the developer to fix. Having different roles to assign to would help organize the work.
In our team, development is usually done by 1 or 2 people together.
Then the code is reviewed by around 2-5 people, individually or in pairs.
Then it is tested by 1-2 people initially and finally by the whole team.
Currently our system only allows us to assign a single person at a given time. That limits our ability to follow who is working on what without looking through the issue's log. Being able to assign multiple people would be a real benefit for us.
What happens if John is assigned a task and cannot finish it, and it is moved to Jane's list because John was a slacker?
Are you OK with losing history of who it was originally assigned to, and the hours that were spent / billed on it?
In an e-Learning scenario, it makes sense to have an issue assigned to multiple users.
Here is what I want to do:
I have a storyboard which I want to assign to 3 people at the same time - the animators, the recording artists and the graphic designers. Once these people finish their tasks, they will pass it on to a common reviewer, who will review and close the issue.
Graphically it would look something like this:
        Storyboard
       /    |    \
graphics animator recording
       \    |    /
        reviewer
            |
          done
The three job roles depend on a single storyboard, and the combined output of the three has to go to a reviewer. I'm racking my brains to get this working in Redmine and haven't found a solution yet.
Got this answer from an Atlassian partner https://www.isostech.com/solutions/
and then later from Atlassian
Objective:
Want to set who does the work for each step on an issue
Summary:
Use a plugin to copy values from custom fields into the assignee field whenever the issue transitions to a new step.
How:
1. Install the Suite Utilities plug-in. This plug-in adds a bunch of new functionality to workflows; you will use it to copy the value of a custom field into the assignee field.
2. Create a custom field (a single-user picker) for each role to be assigned at a different step of the issue, e.g. dev, tester, reviewer.
3. Add these fields to the issue type's screen.
4. Modify the post-function on the workflow transition between each step: add a "Copy Value From Other Field" post function and set it to copy the value from the appropriate user custom field into the assignee field.

Mixing Then and When in BDD User Stories/Acceptance Tests

How do you handle user stories/acceptance tests that have long chains like this one, where the Then/When steps mingle together? Is it best to split this into two acceptance tests, where one tests that the dialog appears and the second tests the behavior after the dialog has been shown?
Feature: Confirmation before removing products from cart
In order to avoid accidentally removing an item from my cart
As a Customer
I want a confirmation dialog to ask me if I'm sure I want to remove an item
Scenario: I want to remove an item from my cart
Given I have added item "xyz" to my cart
When I click "Remove"
Then a confirmation dialog pops up
And it asks "Are you sure you want to remove this from your cart"
When I click "Yes"
Then item "xyz" should be removed from my cart
Your scenario seems a little long, and it's quite heavily tied to the GUI. What would happen if you tied it to the capabilities of the system instead?
Scenario: I want to remove an item from my cart
Given I have a cart containing "xyz"
When I remove "xyz" from my cart
Then my cart should be empty.
The scenario now describes stuff that's useful to the user, and it's easier to refactor.
I love BDD as much as I do because I had a situation much like this. We had 120 acceptance tests and they were mostly failing. Someone had added a confirmation dialogue box much like the one you describe and instantly broke over 80 acceptance tests. By turning them into scenarios with high-level, reusable steps instead, we can easily refactor and keep the tests working even if the mechanisms we use to implement the capabilities of the system change. The actual clicking of buttons happens within those reusable steps, and it's OK to have more than one UI action per step.
I wrote a scenario here which does this if it's useful (it's a DSL rather than English but you should get the idea):
http://code.google.com/p/wipflash/source/browse/Example.PetShop.Scenarios/PetRegistrationAndPurchase.cs
The question is really one of what the "branches" are.
If there are multiple steps, there must be user choices at each step. There should be multiple "When"s. This should form a rich tree with user-selected alternatives at each branch. Each possible outcome should have its own test that makes the relevant choices and arrives at that outcome.
A three-step sequence with two user choices at each step gives 8 possible paths. Different paths may arrive at the same outcome (or may not), but you should have multiple paths through this.
If it's just sequential (because someone felt like writing sequential steps) and the user has no choices, then it's not really driven by consideration of the user's behavior, is it?
I don't see the choices here. No choices == bad smell. But it is easy to test, since there's only one outcome in a sequence of captive steps where the user has few or no choices.
If you work out the choices properly, then each step has multiple outcomes and each step should be tested independently.
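As an illustration of giving each outcome its own test, a sketch for the cart example with one scenario per user choice at the confirmation dialog (the step wording here is assumed, not taken from the original feature):
Scenario: Confirm removing an item from my cart
Given I have a cart containing "xyz"
When I remove "xyz" from my cart and confirm the removal
Then my cart should be empty
Scenario: Cancel removing an item from my cart
Given I have a cart containing "xyz"
When I remove "xyz" from my cart but cancel at the confirmation
Then my cart should still contain "xyz"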
