We have 30,000 scenarios and are trying to use Cucumber with Maven (no TestNG) on a really big project. Dependent scenarios prevent us from picking only one part of the test suite or one test case from the manual test plan; on the other hand, fully independent scenarios significantly increase test execution time (if you start a regression run, it is almost a waste of time).
Is the answer something in between?
e.g.
Use independent scenarios where we can, and divide the dependent ones by functionality, putting them into separate feature files that are based on those functionalities?
What is the best practice for writing feature files for big projects?
Dependent vs. independent scenarios?
Functional feature files vs. user-story (US) feature files?
That is a question for you and your team; you have to get together and decide what the best solution is for you.
Someone here can give you their point of view, but you know what is best for your project's needs.
In general, it is not a good idea to have dependent tests: they are harder to maintain, and if a dependency breaks, all of your tests will fail and produce false negatives. Also, if execution time is the important factor in your automated regression testing, then a middle ground is probably where your solution lies.
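For example, one common middle ground is to keep scenarios independent but tag them by functionality, so you can run just one slice of the suite for a single manual test case while still running everything for full regression. Below is a minimal cucumber-jvm + JUnit sketch; the tag, package, and class names are invented for illustration.

```java
// Hypothetical JUnit 4 runner for cucumber-jvm; tag and package names are examples only.
package com.example.tests;

import org.junit.runner.RunWith;
import io.cucumber.junit.Cucumber;
import io.cucumber.junit.CucumberOptions;

@RunWith(Cucumber.class)
@CucumberOptions(
        features = "src/test/resources/features/checkout",  // one functional area per directory
        glue = "com.example.tests.steps",                    // shared step definitions
        tags = "@checkout and not @wip"                      // run only the slice you need
)
public class CheckoutRegressionTest {
}
```

In recent Cucumber versions the same slice can also be selected from Maven without changing the runner, for example with `mvn test -Dcucumber.filter.tags="@checkout"` (the property name differs in older versions).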
I'm new to Cucumber as a testing suite, and I notice things as I build out features and write steps. Let's say, as a bad example (since I'm working backwards), that I write a bunch of stuff for creating posts that require a User.
I end up writing a bunch of User-based steps (the log-in process, etc.) in a feature set mainly dedicated to Post features.
Is it best practice to later move steps into the appropriate feature set as tests get more complicated and features get added?
You have two parts to consider here.
Organize scenarios so they make sense. That is to place them in the proper feature files.
Organize the implementation of the steps so they make sense. That is, implement the steps in the proper source code files.
Your question boils down to "What makes sense in my context?".
It depends on your stakeholders: do they want all user-facing scenarios in the same feature file, or are they more interested in business-facing scenarios that sometimes involve users? Organize the scenarios so your stakeholders are happy.
How should you organize the steps then? It depends on your developers and your ability to share state between step definitions that are implemented in different source code files.
My approach would probably be to start out small and let the suite grow. Initially this would not involve sharing state between different classes at runtime. When the suite feels too large to handle, divide it into two parts that are as coherent as you can make them. When those get too large, repeat the division. You will, hopefully, end up with something that works well in your context.
Remember that your context and your product is unique. It probably deserves a unique solution that your team feel they can maintain.
Understandability, and therefore maintainability, is the most important property I can think of regarding the executable specification you are building.
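If you do end up needing to share state between step definitions that live in different source files, one common cucumber-jvm approach is constructor injection of a shared context object via the cucumber-picocontainer module. A minimal sketch, assuming that module is on the classpath; all class and step names are invented:

```java
// Sketch of sharing state between step definition classes with cucumber-jvm
// and the cucumber-picocontainer module; all names are illustrative.

// File: TestContext.java -- one instance is created per scenario by the DI container.
public class TestContext {
    public String loggedInUser;
}

// File: UserSteps.java
import io.cucumber.java.en.Given;

public class UserSteps {
    private final TestContext context;

    public UserSteps(TestContext context) {   // injected by picocontainer
        this.context = context;
    }

    @Given("a logged in user {string}")
    public void aLoggedInUser(String name) {
        // ... perform the real log-in here ...
        context.loggedInUser = name;
    }
}

// File: PostSteps.java
import io.cucumber.java.en.When;

public class PostSteps {
    private final TestContext context;

    public PostSteps(TestContext context) {
        this.context = context;
    }

    @When("the user creates a post")
    public void theUserCreatesAPost() {
        // the Post steps can see which user UserSteps logged in
        assert context.loggedInUser != null;
    }
}
```

Each scenario gets a fresh TestContext instance, so scenarios stay independent even though step classes share state within a single scenario.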
I have a Cucumber feature file with over 66 scenarios! The title of the feature file does represent what the scenarios are all about.
But 66 (200 steps) feels like quite a large number. Does this suggest that my feature title is too broad?
What is the maximum number of scenarios one should have in a single feature file (from a best practice point of view)?
Thanks in advance :)
Although I don't know your system and feature file, I can surely say that there is a misunderstanding of scenarios and their purpose.
The purpose of scenarios is to bring a clarification for the feature by examples. Usually, people tend to write scenarios to cover all use cases. If you do scenarios that way, the feature loses the ability to be human-readable.
Keep in mind that acceptance tests are expensive to write and expensive to change. Write the minimum scenarios. If there is a scenario that doesn't bring any additional value for the understanding of the feature, then that scenario shouldn't be there. Move all use cases into a lower level of testing - unit tests.
In most cases a feature has a single-digit number of scenarios, or a few tens if it is a complex feature.
Edit: if the number of scenarios gets close to ten, I would rather split the feature file into several files, each describing a deeper part of the feature.
Yes, 66 scenarios (around 200 steps) is an unusually large amount for a single file. It is likely to be hard to find a particular scenario in the file or to keep it organized. (Multiple smaller files are easier to organize; a directory of files is easier for people to understand and maintain than a long file with comments, or worse yet some uncommented ordering scheme.) It will also take a long time to run the file, which will make development difficult.
More importantly, that many scenarios for a single feature might mean that the feature is extremely complex or that it is very broad. In either case it can probably be broken up into multiple smaller feature files. It also might mean that there are too many scenarios: there might be a scenario for every value of some variable (it might be sufficient to write a single scenario and not worry about different values), or a scenario for every detail of every feature (it might be better to write unit tests, which are smaller, more focused, and faster, for details).
But, as with any software metric about the size of a piece of code, there might be a typical size, but every problem is different. Your feature might really be that complex. We can't say without understanding the domain and seeing the feature file.
I am now working on a project where we are using cucumber-jvm to drive acceptance tests.
On previous projects I would create internal DSLs in groovy or scala to drive acceptance tests. These DSLs would be fairly simple to use such that even a non-techie would be able to write tests with a little bit of guidance.
What I see is that BDD adds another layer of indirection and semantic sugar to the tests, but I fail to see the value-add, especially if the non-techies can use an internal DSL.
In the case of Cucumber, stepDefs seem to scatter the code that drives any given test over several different classes, making the test code difficult to read and debug outside the feature file. On the other hand, putting all the code pertaining to one test in a single stepDef class discourages re-use of stepDefs. Both outcomes are undesirable, leaving me asking whether the natural language is worth all this extra, unintuitive indirection.
Is there something I am missing? Like a subtle philosophical difference between ATDD and BDD? Does the former imply imperative testing whereas the latter implies declarative testing? Do these aesthetic differences have intrinsic value?
So I am left asking what value-add justifies the deterioration in the readability of the actual code that drives the test. Is this BDD stuff actually worth the pain? Is the value-add more than just aesthetic?
I would be grateful if someone out there could come up with a compelling argument as to why the gain of BDD surpasses the pain of BDD.
What I see is that BDD adds another layer of indirection and semantic sugar to the tests, but I fail to see the value-add, especially if the non-techies can use an internal DSL.
The extra layer is the plain-language .feature file, and at the point of creation it has nothing to do with testing; it has to do with capturing the requirements of the system using a technique called specification by example, to create well-defined stories. When written properly in the business language, specifications by example are very powerful at creating a shared understanding. This exercise alone can both reduce the amount of rework and find defects before development starts. This exercise is otherwise known as deliberate discovery.
Once you have a shared understanding of and agreement on the specifications, you enter development and make those specifications executable. Here is where you would use ATDD. So BDD and ATDD are not comparable; they are complementary. As part of ATDD, you drive the development of the system using the behaviour that has been defined by way of example in the story. The nice thing you have as a developer is a formal format that contains preconditions, events, and postconditions that you can automate.
From here on, the automated running of the executable specifications on a CI system will reduce regressions and provide you with all the benefits you get from any other automated testing technique.
The really interesting thing is that the executable specification files are long-lived and evolve over time as you add or change behaviour in your system. Unlike most Agile methodologies, where user stories are thrown away after they have been developed, here you have living documentation of your system that is also the specification and also the automated tests.
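To make that formal format concrete: in cucumber-jvm, each Given/When/Then line maps to a plain Java method, so a precondition, an event, and a postcondition become three small methods. A rough sketch, with an invented withdrawal example:

```java
// Minimal cucumber-jvm glue code for a Given/When/Then specification;
// the domain (account withdrawal) and all names are illustrative only.
import io.cucumber.java.en.Given;
import io.cucumber.java.en.When;
import io.cucumber.java.en.Then;
import static org.junit.Assert.assertEquals;

public class WithdrawalSteps {
    private int balance;
    private int dispensed;

    @Given("an account with a balance of {int}")
    public void anAccountWithABalanceOf(int amount) {
        balance = amount;                      // precondition
    }

    @When("the customer withdraws {int}")
    public void theCustomerWithdraws(int amount) {
        balance -= amount;                     // event
        dispensed = amount;
    }

    @Then("{int} should be dispensed")
    public void shouldBeDispensed(int expected) {
        assertEquals(expected, dispensed);     // postcondition
    }
}
```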
Let's now run through a healthy BDD-enabled delivery process (this is not the only way, but it is the way we like to work):
Deliberate Discovery session.
Output = agreed specifications delta
ATDD to drive development
Output = actualizing code, automated tests
Continuous Integration
Output = a report with screenshots that serves as browsable documentation of the system
Automated Deployment
Output = working software being consumed
Measure & Learn
Output = new ideas and feedback to feed the next deliberate discovery session
So BDD can really help you with the missing piece of most delivery systems: the specifications part. This is typically undisciplined and freeform, and is left up to a few individuals to hold together. This is how BDD is an Agile methodology and not just a testing technique.
With that in mind, let me address some of your other questions.
In the case of Cucumber, stepDefs seem to scatter the code that drives any given test over several different classes, making the test code difficult to read and debug outside the feature file. On the other hand, putting all the code pertaining to one test in a single stepDef class discourages re-use of stepDefs. Both outcomes are undesirable, leaving me asking whether the natural language is worth all this extra, unintuitive indirection.
If you make the stepDefs a super-thin layer on top of your automation testing codebase, then it's easy to reuse the automation code from multiple steps. In the test codebase, you should apply techniques and principles such as the testing pyramid and a shallow depth of test to ensure you have a robust and fast test automation layer. What's also interesting about this separation is that it allows you to reuse the code between your stepDefs and your unit/integration tests.
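As a sketch of what that thin layer can look like (all class and method names here are made up for illustration):

```java
// Thin step definition delegating to a reusable automation/driver layer.
// LoginDriver is a hypothetical class from the test automation codebase;
// the same class can also be called directly from unit/integration tests.
import io.cucumber.java.en.Given;

public class LoginSteps {
    private final LoginDriver loginDriver = new LoginDriver();

    @Given("I am logged in as {string}")
    public void iAmLoggedInAs(String username) {
        loginDriver.logIn(username, "default-password");  // all real logic lives in the driver
    }
}

// Hypothetical automation-layer class, shared by stepDefs and other tests.
class LoginDriver {
    public void logIn(String username, String password) {
        // drive the application (HTTP call, Selenium, etc.) here
    }
}
```

Because LoginDriver knows nothing about Cucumber, it can be called from plain JUnit tests as well, which is the kind of reuse described above.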
Is there something I am missing? Like a subtle philosophical difference between ATDD and BDD? Does the former imply imperative testing whereas the latter implies declarative testing? Do these aesthetic differences have intrinsic value?
As mentioned above, ATDD and BDD are complementary, not comparable. On the point of imperative vs. declarative, specification by example as a technique is very specific. When you are performing the deliberate discovery phase, you always ask the question "can you give me an example?". In that example, you would use exact values. If there are two values that can be used in the precondition (Given) or event (When) steps, and they have different outcomes (Then step), it means you have two different scenarios. If they have the same outcome, it's likely the same scenario. Therefore, as part of the BDD practice, the steps need to be declarative in order to gain the benefits of deliberate discovery.
So I am left asking what value-add justifies the deterioration in the readability of the actual code that drives the test. Is this BDD stuff actually worth the pain? Is the value-add more than just aesthetic?
It's worth it if you are working in a team where you want to solve the problem of miscommunication. One of the reasons people fail with BDD is that the writing and automation of features is left to the developers and the QAs, and the artifacts are no longer coherent living specifications; they are just test scripts.
Test scripts tell you how a system does a particular thing, but they do not tell you why.
I would be grateful if someone out there could come up with a compelling argument as to why the gain of BDD surpasses the pain of BDD.
It's about using the right tool for the right job. Using Cucumber for writing unit tests or automated test scripts is like using a hammer to put a screw into wood. It might work, but it's never pretty and it's always painful!
On the subject of tools, your typical business analyst / product owner is not going to have the knowledge needed to peek into your source control and work with you on adding or modifying specs. We created a commercial tool to fix this problem by allowing your whole team to collaborate over specifications in the cloud while staying in sync (in real time) with your repository. Check out Simian.
I have also answered a question about BDD here that may be of interest to you that focuses more on development:
Should TDD and BDD be used in conjunction?
Cucumber and Selenium are two popular technologies. Many organizations use Selenium for functional testing, and those organizations often want to integrate Cucumber with Selenium because Cucumber makes the application flow easy to read and understand. Cucumber is based on the Behaviour Driven Development framework and acts as a bridge between the following people:
Software Engineer and Business Analyst.
Manual Tester and Automation Tester.
Manual Tester and Developers.
Cucumber also helps the client understand the application's behaviour, as it uses the Gherkin language, which is plain text. Anyone in the organization can understand the behaviour of the software, because Gherkin's syntax is simple, readable, and understandable text.
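For illustration, a minimal cucumber-jvm step definition driving Selenium WebDriver might look like the following; the URL, locator, and step wording are placeholders rather than a recommended design (in a real suite the WebDriver lifecycle would usually be managed with hooks):

```java
// Illustrative Cucumber + Selenium glue code; URL and locators are placeholders.
import io.cucumber.java.en.Given;
import io.cucumber.java.en.When;
import io.cucumber.java.en.Then;
import org.junit.Assert;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class SearchSteps {
    private WebDriver driver;

    @Given("the user is on the home page")
    public void theUserIsOnTheHomePage() {
        driver = new ChromeDriver();               // assumes chromedriver is on the PATH
        driver.get("https://example.com");
    }

    @When("the user searches for {string}")
    public void theUserSearchesFor(String term) {
        driver.findElement(By.name("q")).sendKeys(term + "\n");
    }

    @Then("the results page is shown")
    public void theResultsPageIsShown() {
        Assert.assertTrue(driver.getTitle().toLowerCase().contains("results"));
        driver.quit();
    }
}
```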
I am looking for a stable, free, easy-to-use tool for generating decision tables. TestCaseGenerator is exactly what I'm looking for, but it is far from stable, and if I have thousands of test cases it stops generating them. DecisionTableCreator is another example, but it does not work if you have too many conditions.
I spent a long time searching for such a tool, which I am sure must exist (I don't think TDD can do without one).
Thanks,
Sharon
The decision-table code generator, CCIDE ( http://twysf.users.sourceforge.net/ ) might be what you're looking for.
"TestCaseGenerator is exactly what I'm looking for, but it is far from stable, and if I have thousands of test cases it stops generating them."
How many cases do you need? I'm using http://decision-table.com - it could generate 16,300 cases on my low-end computer. I guess more RAM could give you more cases.
By the way, why do you need so many test cases? I mean, 15+ conditions could probably be split into two/three/four test suites, so your decision tables will be smaller. Maybe it would also be helpful to look at pairwise testing, a technique for reducing the number of test cases without a corresponding loss of test coverage.
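To make the size problem concrete: a full decision table is just the Cartesian product of all condition values, so 15 boolean conditions already yields 2^15 = 32,768 rows, while splitting them into two independent tables of 8 and 7 conditions yields only 256 + 128 = 384 rows. The sketch below, with made-up condition names, enumerates such a table in Java.

```java
// Naive full decision-table generator: one row per combination of condition values.
// The condition names are invented for the example.
import java.util.ArrayList;
import java.util.List;

public class DecisionTable {
    public static void main(String[] args) {
        List<String> conditions = List.of("isMember", "hasCoupon", "isWeekend");
        int rows = 1 << conditions.size();            // 2^n combinations of booleans
        List<boolean[]> table = new ArrayList<>();
        for (int row = 0; row < rows; row++) {
            boolean[] values = new boolean[conditions.size()];
            for (int c = 0; c < conditions.size(); c++) {
                values[c] = ((row >> c) & 1) == 1;    // bit c of the row index
            }
            table.add(values);
        }
        System.out.println(conditions + " -> " + table.size() + " rows");
        // With 15 conditions this would be 32,768 rows, which is why splitting the
        // table, or using pairwise combinations, keeps it manageable.
    }
}
```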
In my workplace, we "follow" the agile methodology. However, all we do is standups. How else do I have to change my way of working as a developer, to follow agile?
Thanks
Agile is really a group of software development methodologies based on iterative development, where requirements and solutions evolve through collaboration between self-organizing cross-functional teams. It's hard to do by yourself.
That said, there are things you can do that will make you more agile, and that your teammates may choose to emulate once they see the advantages:
Work in small pieces. You want to break your tasks down into pieces that can be completed in a reasonable amount of time. (The teams I've worked on usually measured things in half-day units. Thus you could complete 2 units of work a day, and 10 units in a week.)
Commit functioning code. When you're working, you want to commit your code frequently, but only when the code compiles, and works without breaking your unit tests. You do not want to be the person who commits code that breaks a build.
Write Unit Tests. Your team IS writing unit tests for its code, right? If not, then start now. Writing unit tests will force you to structure your code to be testable, which will also force you to improve your implementation and design. It will also detect regression errors, by checking everything that used to work when someone makes a change.
Unit Tests for all bugs. Any time you need to fix a bug, first write a unit test that causes your code to fail in the same manner as the bug. Then fix your code. If the fix is good, your unit test should now pass -- and all of the rest of your unit tests should continue to pass. (A small sketch of this practice follows this list.)
Unit Tests for all new code. When you're building new code, you should be building to a spec. One of the best ways to ensure that the spec is good, is to use the spec to write unit tests for your code. Once you've got enough tests to validate the code you intend to write, go to work, testing your code against your tests. Once your code passes the tests, you can commit to the team repository.
Use Continuous Integration. This is something that the team itself should be doing, but if you can get the use of an extra PC (it doesn't have to be fast; it just needs enough memory and disk space to build your tools and your software), load CruiseControl.NET or Hudson on it, point it at your repository, and configure it to wait for new commits, check out your workspace, build your software, and run your unit tests. Why? Because it will catch the case where someone has neglected to commit all of the pieces of their change, before that change propagates to the whole team.
Automate your builds. Before you can use Continuous Integration you need to be able to build your software repeatedly without human intervention. If you're using Visual Studio, learn how to build using MSBuild or Nant. If you're doing Java, learn how to build with Ant or Maven. By building automatically, you avoid build and release problems associated with manual steps. (I once reduced the build process for a project from a notebook that took 2 professionals a week to complete, to a set of scripts that would take about an hour to run -- you better believe that improved the quality of releases.)
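As a small illustration of the "unit test for every bug" practice above, here is a hedged JUnit 4 sketch; the class under test and the bug itself are invented:

```java
// Hypothetical regression test: first written to reproduce a reported bug
// (discount applied twice), then kept so the bug can never silently return.
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class PriceCalculatorTest {

    @Test
    public void discountIsAppliedOnlyOnce() {
        PriceCalculator calculator = new PriceCalculator();
        // Bug report: a 10% discount on 100.00 returned 81.00 (applied twice).
        // This test failed before the fix and must keep passing afterwards.
        assertEquals(90.00, calculator.applyDiscount(100.00, 0.10), 0.001);
    }
}

// Minimal class under test, shown only to keep the sketch self-contained.
class PriceCalculator {
    double applyDiscount(double price, double rate) {
        return price * (1 - rate);
    }
}
```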
That sounds like waterfall with daily meetings. Implementing agile is a vast difference, and you can't just change from waterfall to agile by yourself; you need others to follow suit for it to work.
I think the largest change will be to stop thinking in "project" scope and start thinking in very small increments of work. For example, when the project "Create website X" comes up, you'll need to break that down on a page-by-page basis. Determine what needs to be done: how exactly are you fetching, storing, updating, and displaying the data? How long will it take to write the different pieces of code required to do that? Once that is laid out (there is much more planning involved in agile, in my experience), you can start saying "By Wednesday, I'll be able to show you guys that I can save on page X and display the data on page Y".
Usually there is a "planning" meeting. This can take an hour or it can take six; it depends on how well your criteria are conveyed, how many members are on the team, and how long a sprint you're working with. Everyone selects work that they will do and puts estimates on it. After your sprint (which most people recommend to be one or two weeks) there is another meeting. Ideally, in this meeting everyone will demo what they have been doing the past week(s), and it will work perfectly. Afterward there is some reflection: what worked well? Did we mis-estimate something terribly?
That is one "cycle", do that ~50 times and website X is complete! :)
To start with, there is no such thing as "the agile methodology"; agile is an umbrella term covering several agile methodologies, and if all your workplace is doing is standups, I can already tell you that this doesn't make a workplace agile.
Second, while you can adopt some "agile practices" (especially engineering practices) at an individual level, this will never be enough to make you agile: (1) agile is, in my opinion, more about the way product development is driven than about engineering practices, and (2) agile is a collective team game.
So, my recommendation would be to dive into, for example, Scrum and XP from the Trenches, and to grab some copies for your coworkers, your boss, or potential sponsors.
Congratulations on doing stand-ups. It's a good first change.
That you're asking suggests that you or the team would like to be better at this. In that case, you can go one of two ways:
Huge change, or
Incremental improvement
If you decide you'd like a huge change, you'll probably need some books, training and maybe a coach or experienced practitioner around. This is often successful if people higher up in the organisation are invested in the change too.
If you decide you'd like to improve incrementally, it's worth reading around Agile just to get some ideas. I recommend "XP Explained". There are a lot of blogs out there, too, as well as posts here. The two things you'll need to do are:
Try to deliver some software, or at least get feedback from the stakeholders
Work out why that was hard and what you can do to make it easier.
We normally do the first with showcases and the second with retrospectives. I recommend having retrospectives at least every two weeks, even if it's really hard to showcase working code.
Things I often see flagged up quickly as problems include:
Team not co-located ("Team" includes BAs and QAs)
Environment not suited
Lack of visibility of work in progress or overall goals
Too much work in progress - things started but not finished
Projects in progress that nobody really cares about
Project progress makes it obvious that it's not worth doing
Codebase is really hard to change
Blame culture discourages collaboration.
Whatever you find out, you won't be the first.
Note that Agile is a transparent methodology, whichever version you use. A lot of people get scared by transparency. This is normal. Sometimes managers higher up have a vested interest in not allowing things to be transparent. This is also common, and at that point you might need external help. Delivering working software can be very persuasive, though.
Good luck!
If you want to do this from the ground up, then all you need is the agile manifesto and a recurring retrospective each week. But I guess that is not enough, so here is my start-up list:
Convert your existing project tasks/points/todos into User Stories
Pair program on everything. Switch pairs often!
Use Test Driven Development. Strive for 100% coverage!
Use one week iterations. Repetition is learning!
Deliver valuable software to the customer in each iteration.
Even if the entire team doesn't work in an agile way, there are a few practices that you can adopt as a developer. You can begin with CI, TDD, and automated deployment. As a team, you can try out retrospective sessions.