A general question regarding testing.
Should test cases be written without the steps? My lead's test cases are written assuming you already know all the requirements and the system, so there is no need to write the steps: as a QA person, you know the steps to test the requirement, and for executing a test case you can go through the BRD/SRS again.
Won't this be double effort?
Looking up the requirement again in the BRD, where it is spread across 2-3 non-consecutive pages.
It is not sufficient for any new tester.
A tester can forget the steps needed to test a requirement.
Advantages of writing steps:
You don't have to look at the BRD again.
Proper test cases with steps can be used by any tester.
Proper coverage.
So are steps required for preparing proper test cases? Are there any standards or rules of thumb for writing test cases in the first place?
Well, if you are designing the test cases, you should include the steps. As one of the testers on the team, you may not be the only person to cover all the test cases in a product, and any tester may test any module of the product. So, for a tester who is not familiar with the module for which you have written the cases, it can be very difficult if there are no steps.
Writing test cases with steps saves a lot of time during regression testing and retesting. And it is difficult for a tester to remember all the test cases if the project is long-term.
Test cases are all about steps! Each test plan should have a detailed description of the environment in which the test cases are to be run/executed, and each test case should have detailed steps!
This way nothing is ambiguous, and when the people working on the project change, there are no question marks left.
No matter what your seniors say, please include all detailed steps and environment details in the test plan (and test cases) so that nothing is ever assumed!
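For illustration, here is a minimal sketch of what a test case with explicit steps and environment details might look like; the feature, IDs, and values are all hypothetical:

```text
Test Case ID:    TC-042 (hypothetical)
Title:           Login with valid credentials
Environment:     Test server build 2.3.1, Chrome, test account "qa_user" (all assumed)
Preconditions:   The user account exists and is active.
Steps:
  1. Open the application login page.
  2. Enter "qa_user" in the username field.
  3. Enter the valid password in the password field.
  4. Click "Login".
Expected Result: The dashboard is displayed and the username appears in the header.
```

Even a reader who has never opened the BRD can execute this case and judge pass/fail.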
No, you should not write test cases without the steps, because:
They are an essential part of the test case. Without them, you can't understand the test cases.
When you hand over the tests, it will be easier for the other person to execute them.
If an issue occurs in the future and your PM asks what steps you performed, the steps will be proof that you tested the feature.
On my team we have about 5 developers and 3 QA testers.
Our sprints are 10 days, but our work as developers is due on the 6th day so that the QA testers have 3 days to test our completed work before our biweekly release.
I feel like this system is very inefficient and really limits the work we can do as developers, since we only get 6 days of development followed by a few days of thumb-twiddling because there aren't any more user stories groomed yet.
How does everyone else do it?
Please keep in mind that what works in one team will not necessarily work in another.
(1) It's a valid question: there are so many companies with a separate QA department (integrated more or less closely) or with dedicated testers within an agile team. And most of the time the roles are brought closer together (check!), but the core idea of agility is not pursued further. How, then, is collaboration supposed to succeed efficiently?
(2) Most answers are valid, too: there is no golden road. You should do what increases the performance of the team. If it helps the team to split tasks into 4-hour units, then do that. If it helps to have the QA people write tests in advance: do it!
In my opinion, transparency and good communication are key. Get people together. Ask the team (within the retrospective if you like):
What is holding them back from continuously integrating chunks of code? (Is it the waterfall-like progress within the sprint?)
How can they deal with it?
As long as you have dependencies on work that only certain individuals (testers, writers, etc.) can do, you won't get out of the situation where someone always has to wait. So maybe it's an option for QA to define, and maybe even write, tests in advance. Furthermore, the developers can be authorized to perform releases independently based on these rules defined by QA. Of course, the suggested option is not feasible in all areas. It is the people who know the constraints and who find solutions to them.
Some of the things you might try:
Break stories down as small as possible
Use stubs and mocks to make features available so that test preparation can start sooner (see the sketch after this list)
Use a test-first approach and write automated tests before the development work starts (both the QAs and the developers can write the automated tests)
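To make the stub idea concrete, here is a minimal sketch in JavaScript. The names (checkout, paymentStub) are hypothetical; the point is that QA can write and run tests before the real payment service exists:

```javascript
// checkout.test.js -- a sketch of testing against a stub (hypothetical names)
const assert = require('assert');

// The real payment service is still in development, so we stub it
// with a canned response.
const paymentStub = {
  charge: async (amount) => ({ status: 'approved', amount })
};

// The feature under test receives its dependency via injection,
// so tests can be written before the real service is available.
async function checkout(cart, paymentService) {
  const total = cart.reduce((sum, item) => sum + item.price, 0);
  return paymentService.charge(total);
}

// This test can run on day 1 of the sprint, against the stub.
(async () => {
  const receipt = await checkout([{ price: 5 }, { price: 7 }], paymentStub);
  assert.strictEqual(receipt.status, 'approved');
  assert.strictEqual(receipt.amount, 12);
  console.log('checkout stub test passed');
})();
```

When the real service lands, the same test can be pointed at it, or kept against the stub as a fast unit test.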
The developers' tasks should be granular enough to be completed in around 4 hours each, if possible. This lets the developers finish around 2 tasks every day, so starting on day 1 the QAs will be able to start testing.
You can change the numbers according to your dynamics, but generally, granular tasks help async work and utilization.
The way you describe the dev/QA split is 'waterfall' in two-week spurts!
One of the Agile Frameworks (DSDM) has a 'Testing Practice' of 'Testing is integrated throughout the lifecycle'.
This means stories/PBIs are tested as they are developed, not all 'saved up' until the end of the sprint!
Nezih TINAS's answer about PBI size can be applied to tasks within a story, IMHO, but I prefer end-to-end PBIs that take the developers 3 to 4 days, with comprehensive acceptance tests; more than 4 or 5 ACs usually means splitting the PBI, still end-to-end, not by front-end/back-end.
While your devs are working for 6 days, what are your 3 QA guys doing?
I'm a complete beginner at unit testing in Node.js, and I want to know the best practice for writing unit tests in Node.js. For example, how many assertions can an it() method have? Is there a standard of writing only one test case per it() method? Please give me an idea of how to write unit test cases.
Thanks in advance. :)
Test one part of the functionality per it() call, and only use multiple assertions if really needed.
If you use 2 assertions in one it() call, failure of the first one will block the second one from being executed, thus hiding part of your tests and preventing you from getting a full view of a possible error.
Study how to use before/after and beforeEach/afterEach inside a describe block - those will really help you to only perform tests on small parts of your code in every it(). See the 'Hooks' chapter in the mocha documentation.
Optionally create your own set of helper functions to set up your code for a single test, to prevent (too much) code duplication in your tests - I believe code duplication in tests is just as bad as code duplication in your 'real' code.
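A minimal sketch of these points with mocha and Node's built-in assert module; the calculator module here is hypothetical and inlined just to keep the example self-contained:

```javascript
// calculator.test.js -- run with: npx mocha calculator.test.js
const assert = require('assert');

// Hypothetical unit under test; in a real project you would require() it.
const calculator = {
  add: (a, b) => a + b,
  divide: (a, b) => {
    if (b === 0) throw new Error('division by zero');
    return a / b;
  }
};

describe('calculator', () => {
  let result;

  beforeEach(() => {
    // Shared setup runs before every it(), keeping each test small.
    result = calculator.add(2, 3);
  });

  it('adds two numbers', () => {
    assert.strictEqual(result, 5); // one assertion for one behavior
  });

  it('throws on division by zero', () => {
    // A separate behavior gets its own it(), so one failure
    // never hides the other.
    assert.throws(() => calculator.divide(1, 0));
  });
});
```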
This free tutorial explains Chai and Mocha quite well, and how to structure your tests.
While Mocha is a regular test framework, Chai is an expectation (assertion) framework. The key difference is the syntactic sugar in how tests are formulated (the use of it() for test cases), which I personally find confusing, too.
For a starter, you should probably stick with Mocha. It might help you to get some wording straight:
Mocha is a test framework (so you have a defined outer set of functionality in which to fill in the gaps, i.e. place your tests, etc.), whereas
Unit.js is a test library, so it offers a bunch of functions (like all kinds of asserts), but you are driving the script yourself (no test suites, no test running).
The mocha.js framework can use the unit.js test functions (see here).
Working on a large and complex application, I wonder where and whether we should be storing scenarios to document how the software works.
When we discuss problems with an existing feature, it's hard to see what we have already done, and it would be hard to look back with a scrum tool such as TFS. We have some tests, but these are not visible to the product owner. Should we be looking to pull out some vast story/scenario list, amending and updating it as we go, or is this not agile?
We have no record of how the software works other than the code, some unit tests, some test cases, and a few out-of-date user guides.
We tend to use our automated acceptance tests to document this. As we work on a user story we also develop automated tests and this is part of our Definition of Done.
We use SpecFlow for the tests and these are written as Given, When, Then scenarios that are easy to read and understand and can be shared with the product owners.
These tests add a lot of value: they are our automated regression suite, so regression testing is quicker and easier, and because they are constantly kept up to date as we develop new stories, they also act as documentation of how the system works.
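As an illustration, a Given, When, Then scenario of the kind SpecFlow consumes might look like this (the feature and values are hypothetical):

```gherkin
Feature: Account withdrawal

  Scenario: Withdrawal within the available balance
    Given an account with a balance of 100.00
    When the user withdraws 40.00
    Then the remaining balance is 60.00
    And the withdrawal appears in the transaction history
```

Because it reads as plain English, a product owner can review it directly, while the bound step definitions keep it executable as a regression test.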
You might find it useful to have a look at a few blogs around Specification by Example which is essentially what we are trying to do.
A few links I found useful in the past are:
http://www.thoughtworks.com/insights/blog/specification-example
http://martinfowler.com/bliki/SpecificationByExample.html
Apart from the tests, we also used a wiki for documentation. In particular, the REST API was documented with request/response examples, but also other software behaviour (results from long discussions, difficult-to-remember stuff).
Since you want to be able to match a description of what you've done to the running software, it sounds like you should put that documentation in version control along with the software. Start with a docs/ directory, then add detail as you need it. I do this frequently, and it just works. If you want to make it web-servable, set up a web server somewhere to check out the docs every so often and point the document root at the working copy's docs/ directory.
My question is really very important.
When I program, I find I make lots of errors in logic, structure, and flexibility when it comes to testing. I have read many books on OOP and all the concepts are clear to me, but I do not know where to start the design of my code or project. Can anybody help me improve this part of my programming skill?
Although I work with PHP + JavaScript, this question is for all the programmers on Stack Overflow.
Note: usually when I hold paper and pen, I wonder where to start from.
If I build something, the problem is how to simplify it... and many other issues, which you all are facing / have faced.
Well, I think everyone is different as is every project. But here is what I personally do...
For my own projects, i.e. with no client requirements, I start at one end or the other: either with the database structure or the UI. I then work down through the layers, making sure that I maintain clear separation of concerns to make testing (unit and system) as well as maintenance as easy as possible.
One thing to note is that, regardless of your approach, the process is iterative. I will often work, refactor, work, refactor, etc., so don't get too bogged down in the details or feel you have to stick to them. The requirements are the key thing (whether for yourself or for a client); the technical implementation is largely incidental.
When dealing with clients the process is somewhat different. You will need to do a fair amount of design up front, so again think from one layer to the next, trying to keep as much of the logic in the correct layer as possible. As an example: you have your DB, then you want a data access layer (DAL) to abstract your code from the DB access. Then you want specific business logic libraries which use the DAL; this abstracts the higher portions of code from the data (they go through the business layer), and so on.
Just think of each level and try to keep it as generic as possible; that way, when you wish to change the storage for the data, you simply change the DAL and everything else works as before...
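A minimal sketch of that layering in JavaScript, with all names hypothetical: the business layer depends only on the DAL's interface, so swapping storage means swapping one object.

```javascript
// Hypothetical data access layer (DAL): the rest of the code only knows
// about findUser/saveUser, not about where the data actually lives.
function createInMemoryUserDal() {
  const users = new Map();
  return {
    findUser: async (id) => users.get(id) ?? null,
    saveUser: async (user) => { users.set(user.id, user); return user; }
  };
}

// Business layer: depends on the DAL interface, not on the storage.
function createUserService(dal) {
  return {
    register: async (id, name) => {
      if (await dal.findUser(id)) throw new Error('user already exists');
      return dal.saveUser({ id, name });
    }
  };
}

// Replacing the in-memory DAL with, say, a SQL-backed object exposing the
// same two functions would leave the business layer untouched.
(async () => {
  const service = createUserService(createInMemoryUserDal());
  console.log(await service.register(1, 'Ada')); // { id: 1, name: 'Ada' }
})();
```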
As far as starting the design of your project is concerned, a whole lot depends on what you are developing, that is, the requirements of the application. So the first thing is that you must collect information about the purpose of your application. And when we start to program, a plain pattern must be kept in mind, which is, as a universal fact of programming, Input-Process-Output. So, design starts with input. Collect as much information as you can about what will be required as the input of your application. If an input is not made by your user, then it does not need to appear in your front-end design (in Windows language, the so-called "Form"). What the user will give is the matter of concern in designing the input area (the very first step of the project).
During the design phase, constant interaction with the user is required to make an effective and flexible initial design, as ultimately he or she is going to use it. When I'm starting a project's design, I always consider the user a lazy person; if we keep that in mind, our application will be simpler and easier to use. Once you kick off the start, the flow itself will suggest the next step.
Hope this helps. :-)
I'm interested in evaluating bug trackers, but I wanted to step back and figure out what sorts of criteria are most important in bug-tracking software. So far the things I've thought of include:
integration with source control
usability
basic features (email notifications, rss, case states)
customization
advanced features (reporting, visualizations)
stability
cost
IDE integration
Any ideas?
Ease of use
This should, in my opinion, be at the top of your list of features to evaluate against. You want in-house developers and testers to take any and all things they notice in the software and plug them into the tool, even if they're currently working on something else. For this to happen, the tool must be so easy to use that it stays out of the way and just takes your data. The worst bugs are those you don't know about.
A tool that has 15+ fields on the screen, where 10+ are required just to be able to submit the issue, is not such a system. With such a system, you'll get Post-it notes from testers to developers about the little things.
When evaluating BugTracker X, which bugtracker do the developers of BugTracker X use?
customizable workflows (from "open" to "in work" to "resolved" to "closed")
fine-grained access control
There was a recent thread on Hacker News about this exact question. Lots of good stuff in there!
An API. Mandatory.
You MUST be able to catch and automatically submit bugs into your bug tracker from applications running in the field.
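A minimal sketch of that idea for a Node.js application; the endpoint URL and payload fields are hypothetical, since every tracker's API differs:

```javascript
// Hypothetical example: automatically report uncaught errors from a deployed
// Node.js app to a bug tracker's REST API. The URL and payload shape are
// made up; consult your tracker's actual API documentation.
const TRACKER_URL = 'https://tracker.example.com/api/issues'; // hypothetical

process.on('uncaughtException', async (err) => {
  try {
    await fetch(TRACKER_URL, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        title: err.message,
        stack: err.stack,
        version: process.env.APP_VERSION, // environment details help triage
      }),
    });
  } finally {
    process.exit(1); // process state is unknown after an uncaught exception
  }
});
```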
(Copy/Pasted from "Lasse V. Karlsen"'s answer)
You want inhouse developers and testers to take any and all things they notice in the software and plug it into the tool, even if they're currently working on something else. For this to happen, the tool must be so easy to use that it stays out of the way and just takes your data. The worst bugs are those you don't know about.
Even good, conscientious testers, if they are focused on testing component A but happen to stumble on a bug in component B, might not actually enter that bug if there is a lot of friction in the bug tracker. Friction means required fields. It's not that the testers are bad or lazy - it's just how the human mind works. We focus. We don't see the guy in the gorilla suit.
The Joel/FogBugz philosophy of NO required fields is the right one (also the philosophy of my own BugTracker.NET). You can almost always gather the details later - what OS, what version, what browser, etc.
Also, take a look at "Bug Shooting" if your app has a GUI. You want to make it as easy as possible for testers to take a screenshot and get it into the bug tracker, and that's a great tool for it. Pick a tracker that works with Bug Shooting or has its own dedicated screenshot tool.
Distribution. My version control system is distributed, so why shouldn't my bug tracker be? If I fix a bug on the train, why should I be able to make the fix but not record it?
Probably everything mentioned by others, plus some from me.
If you have a long-term, big project and a separate testing team that will do functional tests, you should take a few additional things into consideration:
- can bugs be linked to test cases (and more precisely, to a given run)?
- can the defect tracking system exchange data with the test management system?
- can it produce (useful) reports?
- can bugs be grouped by release?