Maximum/Optimal number of Cucumber test scenarios in one file - cucumber

I have a Cucumber feature file with over 66 scenarios! The title of the feature file does represent what the scenarios are all about.
But 66 scenarios (about 200 steps) feels like quite a large number. Does this suggest that my feature title is too broad?
What is the maximum number of scenarios one should have in a single feature file (from a best practice point of view)?
Thanks in advance :)

Although I don't know your system or your feature file, I can say with some confidence that there is a misunderstanding of scenarios and their purpose.
The purpose of scenarios is to clarify the feature through examples. People often write scenarios to cover every use case instead, and written that way the feature loses its ability to be read by humans.
Keep in mind that acceptance tests are expensive to write and expensive to change. Write the minimum number of scenarios: if a scenario doesn't add anything to the understanding of the feature, it shouldn't be there. Move the remaining use cases to a lower level of testing, unit tests, as sketched below.
In most cases a feature needs only a handful of scenarios, or a few tens if it's a complex feature.
Edit: If the number of scenarios gets close to 10, I would rather split the feature file into several files, each describing a deeper part of the feature.
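To make the "move the use cases to unit tests" advice concrete, here is a minimal sketch of what that lower level might look like, assuming JUnit 5. The password rule and its test cases are invented purely for illustration and are not from the question.

import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

class PasswordRulesTest {

    // Stand-in for production code; the real rule would live in the application.
    static boolean isValid(String password) {
        return password != null
                && password.length() >= 8
                && password.chars().anyMatch(Character::isDigit);
    }

    @Test
    void acceptsMinimumLengthWithDigit() {
        assertTrue(isValid("abcdefg1"));
    }

    @Test
    void rejectsTooShortPassword() {
        assertFalse(isValid("abc1"));
    }

    @Test
    void rejectsPasswordWithoutDigit() {
        assertFalse(isValid("abcdefgh"));
    }

    @Test
    void rejectsNullPassword() {
        assertFalse(isValid(null));
    }
}

One readable scenario ("a user with a valid password can register") stays in the feature file; the permutations live down here, where they are cheap to run and cheap to change.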

Yes, 66 scenarios (around 200 steps) is an unusually large number for a single file. It will be hard to find a particular scenario in the file or to keep it organized. (Multiple smaller files are easier to organize; a directory of files is easier for people to understand and maintain than one long file with comments, or worse, an uncommented ordering scheme.) The file will also take a long time to run, which will make development difficult.
More importantly, that many scenarios for a single feature might mean that the feature is extremely complex or very broad. In either case it can probably be broken up into multiple smaller feature files. It might also mean that there are simply too many scenarios: perhaps a scenario for every value of some variable (a single scenario might be sufficient, without worrying about the different values), or a scenario for every detail of every feature (details are better covered by unit tests, which are smaller, more focused, and faster).
But, as with any metric about the size of a piece of code, there may be a typical size, yet every problem is different. Your feature might really be that complex. We can't say without understanding the domain and seeing the feature file.

Related

Independent vs Dependent Scenarios in feature files (Cucumber Java-Maven)?

We have 30,000 scenarios and are trying to use Cucumber with Maven (no TestNG) on a really big project. Dependent scenarios stop us from picking just one part of the test suite, or one test case from the manual test plan; on the other hand, independent scenarios significantly increase test execution time (a full regression run becomes almost a waste of time).
Is the answer something in between?
e.g.
Use independent scenarios where we can, and divide the dependent ones by functionality into separate feature files?
What is the best practice for writing feature files for big projects?
Dependent vs. independent?
Functional feature files vs. user-story feature files?
That is a question for you and your team; you have to get together and decide what the best solution is for your project.
Someone here can give you their point of view, but you know best what your project needs.
In general, it is not a good idea to have dependent tests: they are harder to maintain, and if a dependency breaks, all of the tests that rely on it fail and produce false negatives. If execution time is the critical factor in your automated regression testing, then perhaps find a middle ground, and your solution will be there.
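One possible middle ground, offered as a sketch rather than as part of the answer above: keep scenarios independent, group them into feature files by functionality, and use tags to pick the subset you want to run. This assumes Cucumber-JVM 4+ with the JUnit 4 runner; the paths, package, and tag names are examples only.

import org.junit.runner.RunWith;

import io.cucumber.junit.Cucumber;
import io.cucumber.junit.CucumberOptions;

@RunWith(Cucumber.class)
@CucumberOptions(
        features = "src/test/resources/features/checkout",  // one functional area per runner
        glue = "com.example.steps",                          // shared step definitions
        tags = "@smoke and not @wip"                         // pick the subset you need
)
public class CheckoutSmokeTest {
}

A full regression run can use a broader runner (or no tag filter at all), while day-to-day work runs only the feature files and tags that matter.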

Can I reuse a cucumber/gherkin Example block?

I've got two different scenarios that use the same Examples block. I need to run the Examples block for two different times of the day, and I'm looking for a succinct way to do this (without copy-pasting my Examples block).
I'm replacing the yyyymmdd with an actual date in my step definition.
I'd like to reuse my Examples block because in real life it's a MUCH longer list.
Scenario Outline: File arrives in the morning
  Given a file <Filename> arrives in the morning
  When our app runs
  Then the file should be moved to <NewFilename>
  And the date should be today

  Examples:
    | Filename | NewFilename       |
    | FileA    | NewFileA_yyyymmdd |
    | FileB    | NewFileB_yyyymmdd |

Scenario Outline: File arrives in the evening
  Given a file <Filename> arrives in the evening
  When our app runs
  Then the file should be moved to <NewFilename>
  And the date should be tomorrow

  Examples:
    | Filename | NewFilename       |
    | FileA    | NewFileA_yyyymmdd |
    | FileB    | NewFileB_yyyymmdd |
I'm implementing this in java, though I don't know if that's a relevant detail.
No, this is not supported in the Gherkin syntax. I don't often advise copy-and-paste, but this is one case where it is warranted due to a missing feature of the language.
Generally this should not be a big deal, as the Examples block should be small. If you really need a large number of examples, then recreating this test in code only (Java, Python, C#, etc.) might be the best idea. Most unit test libraries offer some form of data-driven tests that can provide a DRYer, more maintainable solution than Gherkin.
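As a sketch of that "data-driven tests in code" idea, here is what the same examples could look like as a JUnit 5 parameterized test. The rename helper is a stand-in for the real application logic, and the expected names are assumptions based on the question's table.

import static org.junit.jupiter.api.Assertions.assertEquals;

import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

class FileRenamingTest {

    private static final DateTimeFormatter STAMP = DateTimeFormatter.ofPattern("yyyyMMdd");

    // Stand-in for the real renaming logic, which in the question lives in the app.
    static String rename(String original, int daysToAdd) {
        return "New" + original + "_" + LocalDate.now().plusDays(daysToAdd).format(STAMP);
    }

    @ParameterizedTest
    @CsvSource({
            // filename, expected prefix, daysToAdd (0 = morning/today, 1 = evening/tomorrow)
            "FileA, NewFileA_, 0",
            "FileB, NewFileB_, 0",
            "FileA, NewFileA_, 1",
            "FileB, NewFileB_, 1"
    })
    void appendsTheExpectedDate(String filename, String expectedPrefix, int daysToAdd) {
        String expected = expectedPrefix + LocalDate.now().plusDays(daysToAdd).format(STAMP);
        assertEquals(expected, rename(filename, daysToAdd));
    }
}

The CSV rows act as the single, shared "Examples block": adding a case is one line instead of one row in each of two outlines.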
This is something that's better tested at a lower level. What you are testing here is your file-renaming algorithm. You could write a unit test for it, which would:
run much, much faster (100, 1,000, or even 10,000 times faster is perfectly realistic)
be much more expressive
deal with edge cases better
Once you have done that, I would write a single scenario that covers the whole end-to-end process and just ensures that the file is moved and renamed, e.g.
Given a file has arrived
When our app runs
Then the file should be moved
And it should be renamed
And the new name should contain the current date
Cukes are expensive to create and particularly expensive to run, so you need to exercise a lot of functionality with each one. When you use outlines to create near-identical scenarios, you are just wasting loads of runtime and adding complexity for little benefit.

Cucumber: Best practice for writing cucumber steps that are shared among different feature sets?

I'm new to Cucumber as a testing suite. I notice this as I build out features and write steps. Let's say, as a bad example (since I'm working backwards), I write a bunch of stuff for creating posts, which requires a User.
I end up writing a bunch of User-based steps (the login process, etc.) in a feature set mainly dedicated to Post features.
Is it best practice to later move steps into the appropriate feature set as tests get more complicated and features get added?
You have two parts to consider here.
Organize the scenarios so they make sense, that is, place them in the proper feature files.
Organize the implementation of the steps so it makes sense, that is, implement the steps in the proper source code files.
Your question boils down to "What makes sense in my context?".
It depends on your stakeholders: do they want all user-facing scenarios in the same feature file, or are they more interested in business-facing scenarios that sometimes involve users? Organize the scenarios so your stakeholders are happy.
How should you organize the steps then? It depends on your developers and your ability to share state between step definitions that are implemented in different source code files.
My approach would probably be to start out small and let the suite grow. Initially this would not involve sharing state between different classes at runtime. When the suite feels too large to handle, divide it into two parts that are as coherent as you can make them. When a part gets too large, repeat the division. You will, hopefully, end up with something that works well in your context.
Remember that your context and your product are unique. They probably deserve a unique solution that your team feels it can maintain.
Understandability, and therefore maintainability, is the most important property I can think of for the executable specification you are building.
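If you do reach the point of sharing state between step classes, one common Cucumber-JVM approach is constructor injection of a plain "scenario context" object via the cucumber-picocontainer module. A minimal sketch with invented class and step names; normally each class would live in its own source file.

import io.cucumber.java.en.Given;
import io.cucumber.java.en.When;

// Plain state holder; with cucumber-picocontainer on the classpath, one instance
// is created per scenario and injected into every step class that asks for it.
class TestContext {
    String loggedInUser;
}

class UserSteps {
    private final TestContext context;

    UserSteps(TestContext context) {
        this.context = context;
    }

    @Given("a logged in user")
    public void aLoggedInUser() {
        context.loggedInUser = "alice";
    }
}

class PostSteps {
    private final TestContext context;

    PostSteps(TestContext context) {
        this.context = context;
    }

    @When("the user creates a post")
    public void theUserCreatesAPost() {
        // Both step classes see the same per-scenario state.
        if (context.loggedInUser == null) {
            throw new IllegalStateException("expected UserSteps to have logged someone in");
        }
    }
}

This keeps user-related steps in one class and post-related steps in another, while still letting a single scenario flow across both.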

Automatic Decision Table generator

I am looking for a stable, free, easy-to-use tool for generating decision tables. TestCaseGenerator is exactly what I'm looking for, but it is far from stable, and with thousands of test cases it stops generating them. DecisionTableCreator is another example, but it does not work if you have too many conditions.
I have spent a long time searching for such a tool, which I am sure must exist (I don't think TDD can do without one).
Thanks,
Sharon
The decision-table code generator, CCIDE ( http://twysf.users.sourceforge.net/ ) might be what you're looking for.
"TestCaseGenerator is exactly what I'm looking for, but is far from being stable, and if I have thousands of test cases it stops generating the test case."
How many cases do you need? I'm using http://decision-table.com; it could generate 16,300 cases on my low-end computer. I guess more RAM would give you more cases.
By the way, why do you need so many test cases? With 15+ conditions, you could probably split them into two, three, or four test suites, so your decision tables would be smaller. It may also be helpful to look at pairwise testing, a technique for reducing the number of test cases without giving up much test coverage.
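None of these tools change the underlying arithmetic: a full decision table over n independent boolean conditions has 2^n rows, which is why 15+ conditions explode into tens of thousands of cases. Here is a small sketch with invented condition names, just to make that growth visible; splitting the conditions into separate suites or using pairwise generation is what keeps the row count manageable.

import java.util.List;

public class DecisionTable {
    public static void main(String[] args) {
        // Hypothetical conditions; a real table would take these from your spec.
        List<String> conditions = List.of("isMember", "hasCoupon", "orderOverLimit");
        int rows = 1 << conditions.size();   // 2^n combinations

        System.out.println(String.join(" | ", conditions));
        for (int row = 0; row < rows; row++) {
            StringBuilder line = new StringBuilder();
            for (int c = 0; c < conditions.size(); c++) {
                line.append((row >> c & 1) == 1 ? "T" : "F");
                if (c < conditions.size() - 1) {
                    line.append(" | ");
                }
            }
            System.out.println(line);
        }
        // With 15 conditions this is already 32,768 rows.
    }
}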

How to keep track of performance testing

I'm currently doing performance and load testing of a complex many-tier system, investigating the effect of different changes, but I'm having problems keeping track of everything:
There are many copies of different assemblies:
  Originally released assemblies
  Officially released hotfixes
  Assemblies that I've built containing further additional fixes
  Assemblies that I've built containing additional diagnostic logging or tracing
There are many database patches, and some of the above assemblies depend on certain database patches being applied.
Many different logging levels exist in the different tiers (application logging, application performance statistics, SQL Server profiling).
There are many different scenarios; sometimes it is useful to test only one scenario, other times I need to test combinations of different scenarios.
Load may be split across multiple machines or run on only a single machine.
The data present in the database can change; for example, some tests might be done with generated data and then later with data taken from a live system.
There is a massive amount of potential performance data to be collected after each test, for example:
  Many different types of application-specific logging
  SQL Profiler traces
  Event logs
  DMVs
  Perfmon counters
The database(s) are several GB in size, so where I would once have used backups to revert to a previous state, I now tend to apply changes to whatever database is left after the last test, which causes me to quickly lose track of things.
I collect as much information as I can about each test I do (the scenario tested, which patches are applied, what data is in the database), but I still find myself having to repeat tests because of inconsistent results. For example, I just did a test which I believed to be an exact duplicate of a test I ran a few months ago, but with updated data in the database. I know for a fact that the new data should cause a performance degradation; however, the results show the opposite!
At the same time, I find myself spending a disproportionate amount of time recording all these details.
One thing I considered was using scripting to automate the collection of performance data etc., but I wasn't sure this was such a good idea: not only is it time spent developing scripts instead of testing, but bugs in my scripts could cause me to lose track of things even quicker.
I'm after some advice / hints on how to better manage the test environment, in particular how to strike a balance between collecting everything and actually getting some testing done, at the risk of missing something important.
Scripting the collection of the test parameters + environment is a very good idea to check out. If you're testing across several days, and the scripting takes a day, it's time well spent. If after a day you see it won't finish soon, reevaluate and possibly stop pursuing this direction.
But you owe it to yourself to try it.
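As a sketch of what that scripting might look like (written in Java here only to match the rest of this page; any scripting language would do), the following records a timestamped snapshot of the test parameters and environment before each run. The file layout and parameter names are invented.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.List;

public class RunSnapshot {
    public static void main(String[] args) throws IOException {
        String stamp = LocalDateTime.now().format(DateTimeFormatter.ofPattern("yyyyMMdd-HHmmss"));
        Path snapshot = Path.of("test-runs", "run-" + stamp + ".txt");
        Files.createDirectories(snapshot.getParent());

        // Everything you would otherwise note down by hand goes in here.
        Files.write(snapshot, List.of(
                "scenario=checkout-peak-load",
                "assemblies=hotfix-1234 + extra diagnostic logging",
                "dbPatchLevel=patch-42",
                "dataSet=live snapshot",
                "loadMachines=3",
                "loggingLevels=app:verbose, sql-profiler:on, perfmon:standard"));
        System.out.println("Recorded run parameters in " + snapshot);
    }
}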
I would tend to agree with orip: scripting at least part of your workload is likely to save you time. You might take a moment to ask which tasks are the most time-consuming in terms of your labor and how amenable they are to automation. Scripts are especially good at collecting and summarizing data, typically much better than people. If the performance data requires a lot of interpretation on your part, you may have problems.
An advantage of scripting some of these tasks is that you can then check the scripts in alongside the source, patches, and branches, and you may find you benefit from giving your system's complexity an organizational structure rather than struggling to chase it as you do now.
If you can get away with testing against only a few set configurations, that will keep the administration simple. It may also make it easier to put one configuration on each of several virtual machines, which can be quickly redeployed to give clean baselines.
If you genuinely need the complexity you describe, I'd recommend building a simple database that lets you query the multivariate results you have. Having a column for each of the important factors will allow you to answer questions like "which test configuration had the lowest variance in latency?" and "which test database turned up the most bugs?". I use sqlite3 (probably through the Python wrapper or the Firefox plug-in) for this kind of lightweight collection, because it keeps maintenance overhead relatively low and avoids perturbing the system under test too much, even if you need to run on the same box.
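A sketch of what such a results database might look like when driven from Java, assuming the org.xerial sqlite-jdbc driver is on the classpath; the table, columns, and sample values are invented for illustration.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

public class ResultsDb {
    public static void main(String[] args) throws Exception {
        try (Connection db = DriverManager.getConnection("jdbc:sqlite:perf-results.db")) {
            try (Statement ddl = db.createStatement()) {
                ddl.execute("CREATE TABLE IF NOT EXISTS test_run ("
                        + " run_id INTEGER PRIMARY KEY,"
                        + " scenario TEXT, assembly_build TEXT, db_patch_level TEXT,"
                        + " data_set TEXT, machines INTEGER,"
                        + " avg_latency_ms REAL, notes TEXT)");
            }
            try (PreparedStatement insert = db.prepareStatement(
                    "INSERT INTO test_run (scenario, assembly_build, db_patch_level,"
                    + " data_set, machines, avg_latency_ms, notes) VALUES (?, ?, ?, ?, ?, ?, ?)")) {
                insert.setString(1, "checkout-peak-load");
                insert.setString(2, "hotfix-1234 + extra diagnostic logging");
                insert.setString(3, "patch-42");
                insert.setString(4, "live snapshot");
                insert.setInt(5, 3);
                insert.setDouble(6, 184.2);
                insert.setString(7, "compare with run 17; profiler trace attached");
                insert.executeUpdate();
            }
        }
        // Later, queries like
        //   SELECT scenario, MIN(avg_latency_ms) FROM test_run GROUP BY scenario
        // answer "which configuration had the lowest latency?" directly.
    }
}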
Scripting the tests will make them quicker to execute and permit results to be gathered in an already-ordered way, but it sounds like your system may be too complex to make this easy to do.
