I am looking for a stable, free, easy-to-use tool for generating decision tables. TestCaseGenerator is exactly what I'm looking for, but it is far from stable: if I have thousands of test cases it stops generating them. DecisionTableCreator is another example, but it does not work if you have too many conditions.
I spent a long time searching for such a tool, which I am sure must exist (I don't think TDD can do without one).
Thanks,
Sharon
The decision-table code generator CCIDE (http://twysf.users.sourceforge.net/) might be what you're looking for.
> TestCaseGenerator is exactly what I'm looking for, but it is far from stable: if I have thousands of test cases it stops generating them.
How many cases do you need? I'm using http://decision-table.com - it could generate 16,300 cases on my low-end computer. I guess more RAM could give you more cases.
By the way, why do you need so many test cases? I mean, 15+ conditions could probably be split into two/three/four test suites, so your decision tables would be smaller. It may also be helpful to look at pairwise testing - a technique for reducing the number of test cases without significantly decreasing test coverage.
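To make the "smaller tables" and pairwise ideas concrete, here is a minimal Python sketch. The example conditions and the naive greedy reduction are my own illustration, not how any particular tool works: it first builds the full decision table with itertools.product, then keeps only enough rows to cover every pair of condition values at least once.

```python
from itertools import combinations, product

conditions = {                      # hypothetical example conditions
    "logged_in": [True, False],
    "has_discount": [True, False],
    "payment": ["card", "paypal", "invoice"],
    "country": ["US", "DE", "IL"],
}

values = list(conditions.values())
full_table = list(product(*values))           # every combination: 2*2*3*3 = 36 rows
print("full decision table:", len(full_table), "rows")

# Naive greedy pairwise reduction: keep a row only if it covers at least one
# value pair that no previously kept row covers.
uncovered = {
    ((i, a), (j, b))
    for i, j in combinations(range(len(values)), 2)
    for a in values[i]
    for b in values[j]
}
kept = []
for row in full_table:
    covered = {((i, row[i]), (j, row[j])) for i, j in combinations(range(len(row)), 2)}
    if covered & uncovered:
        kept.append(row)
        uncovered -= covered
    if not uncovered:
        break

print("pairwise-reduced table:", len(kept), "rows")
```

Real pairwise tools do this far more cleverly, but even this naive version usually cuts the table down to a fraction of its full size.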
We have 30,000 scenarios and are trying to use Cucumber with Maven (no TestNG) on a really big project. Dependent scenarios prevent us from picking out just one part of the test suite or one test case from the manual test plan; on the other hand, independent scenarios significantly increase test execution time (if you run a full regression it is almost a waste of time).
Is the answer something in between?
E.g. use independent scenarios where we can, and divide the dependent ones by functionality, putting them into separate feature files based on those functionalities?
What is the best practice for writing feature files for big projects?
Dependent vs. independent?
Functional feature files vs. user-story (US) feature files?
That is a question for you and your team; you have to get together and decide what the best solution is for you.
Someone here can give you their point of view, but you know best what your project needs.
In general, it is not a good idea to have dependent tests: they are harder to maintain, and if a dependency breaks, all of your tests fail and produce false negatives. Also, if execution time is the important factor in your automated regression testing, then look for a middle ground; your solution will be somewhere in between.
I have a Cucumber feature file with over 66 scenarios! The title of the feature file does represent what the scenarios are all about.
But 66 scenarios (200 steps) feels like quite a large number. Does this suggest that my feature title is too broad?
What is the maximum number of scenarios one should have in a single feature file (from a best practice point of view)?
Thanks in advance :)
Although I don't know your system or your feature file, I can say with some confidence that there is a misunderstanding of scenarios and their purpose.
The purpose of scenarios is to clarify the feature through examples. People usually tend to write scenarios to cover all use cases. If you write scenarios that way, the feature loses its ability to be human-readable.
Keep in mind that acceptance tests are expensive to write and expensive to change. Write the minimum scenarios. If there is a scenario that doesn't bring any additional value for the understanding of the feature, then that scenario shouldn't be there. Move all use cases into a lower level of testing - unit tests.
In most cases, a feature has a handful of scenarios, or tens of them if it's a complex feature.
Edit: If the number of scenarios gets close to 10, I would rather split the feature file into more files, each describing a deeper part of the feature.
Yes, 200 is an unusually large number of scenarios for a single file. It is likely to be hard to find a particular scenario in the file or to keep it organized. (Multiple smaller files are easier to organize; a directory of files is easier for people to understand and maintain than a long file with comments or worse yet some uncommented ordering scheme.) It will also take a long time to run the file, which will make development difficult.
More importantly, 200 scenarios for a single feature might mean that the feature is extremely complex or that it is very broad. In either case it can probably be broken up into multiple smaller feature files. It also might mean that there are too many scenarios. There might be a scenario for every value of some variable (it might be sufficient to write a single scenario and not worry about different values) or a scenario for every detail of every feature (it might be better to write unit tests, which are smaller and more focused and faster, for details).
But, as with any software metric about the size of a piece of code, there might be a typical size, but every problem is different. Your feature might really be that complex. We can't say without understanding the domain and seeing the feature file.
A common workflow in scientific computing is to first write code (perhaps a simulation), run it and analyse its results, then make modifications based on what the previous round of results show. This cycle may go round tens, possibly even hundreds of times before the project is finished.
A key problem of this development cycle is one of reproducibility. As I have gone through this cycle, I will have produced results, graphs, and various other output. I want to be able to take any graph (from yesterday, last week, month, or longer) and reliably reconstruct the code and environment which were used to produce this. How can I solve this problem? The "obvious" solution appears to be one of organisation and recording everything, however this has the potential to create much additional work. I'm interested in the balance of achieving this without handicapping productivity.
http://ipython.org/notebook.html
for people who want to share reproducible research.
http://jupyter.org/
Not just Python: many languages are supported.
Recently, I was experimenting with the Julia language, and this is one of the recommended tutorials. It uses IJulia, which is based on IPython; it is a very nice intro.
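Whichever notebook you use, a lightweight complement that addresses the "recording everything" concern from the question is to stamp every result with the exact code version and parameters that produced it. Here is a minimal sketch, assuming your project lives in a git repository; run_simulation and the file layout are just placeholders for your own code:

```python
import json
import subprocess
from datetime import datetime, timezone
from pathlib import Path

def run_simulation(params):
    """Placeholder for your actual simulation."""
    return {"mean": 0.42 * params["scale"]}

def save_with_provenance(results, params, out_dir="results"):
    """Write results next to a record of exactly which code produced them."""
    commit = subprocess.check_output(["git", "rev-parse", "HEAD"], text=True).strip()
    dirty = subprocess.run(["git", "diff", "--quiet"]).returncode != 0
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    run_dir = Path(out_dir) / f"{stamp}_{commit[:8]}"
    run_dir.mkdir(parents=True)
    (run_dir / "results.json").write_text(json.dumps(results, indent=2))
    (run_dir / "provenance.json").write_text(json.dumps(
        {"commit": commit, "uncommitted_changes": dirty,
         "parameters": params, "timestamp": stamp}, indent=2))
    return run_dir

if __name__ == "__main__":
    params = {"scale": 2.0}
    print("saved to", save_with_provenance(run_simulation(params), params))
```

Any graph in that folder then tells you which commit to check out (and warns you if there were uncommitted changes), which covers a large part of the reproducibility problem with very little extra work per run.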
I have been trying to understand how ACO (ant colony optimization) can be implemented with data parallelism. I have read some material found through Google, but I only need the basic idea explained in a simple way. Most of the papers talk about everything except the main thing in simple words.
What I have understood so far is that we make it run in parallel by using multi-tasking (threading). But I am not sure what each thread would do, or how we could split the work into threads without causing trouble.
Does it mean that we should create a separate thread for each ant? But that would cause lots of threads to be created! So if there are 200 ants, then 200 threads?
I am still confused about data parallelism in ACO. I would really love to hear, in simple words, how we would implement it in parallel.
A few simple ideas to run ACO in parallel
Since you have already read up on ACO, here are a few simple ideas on ways to run it in parallel. Rather than getting caught up in multiple threads and multi-tasking, it might be helpful to think in terms of the 'parallel compute resources' at your disposal.
ACO is one case of agent-based simulation (ABS), and ABS lends itself particularly well to parallelization.
Simple Options
Option 1. Run a full version of ACO in each of the parallel resources.
Code your ACO algorithm and run several independent copies of it in parallel. (Since there is a stochastic element to the algorithm, you can then pick the 'best' solution found across the runs.)
Option 2. Explore the effects of varying ACO parameters
Like any simulation approach, any ACO implementation has a large number of runtime parameters: number of vertices, time to run, number of ants, pheromone evaporation rates, the probability functions used to choose path options, and many more. When you multiply these options together, they add up to a large number of cases to be run. Divide that work up among your parallel compute resources.
The two options mentioned above are sometimes referred to as 'embarrassingly parallel'. They are very easy to implement (think of it as a design of experiments): you get back a whole matrix of results, and you can draw conclusions by studying what effect the parameter changes had on the solution.
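As a concrete illustration of options 1 and 2, here is a minimal Python sketch using a process pool. run_aco is only a stand-in for your real ACO implementation, and the parameter names are made up for the example:

```python
import random
from itertools import product
from multiprocessing import Pool

def run_aco(args):
    """Stand-in for your real ACO run: returns (params, seed, best_tour_length)."""
    params, seed = args
    rng = random.Random(seed)
    # ... your colony loop would go here; we fake a tour length for illustration
    return params, seed, rng.uniform(100, 200)

if __name__ == "__main__":
    # Option 1: same parameters, several seeds -> keep the best stochastic run.
    # Option 2: a small grid of parameter settings, a few runs per combination.
    grid = [{"n_ants": n, "evaporation": e}
            for n, e in product([50, 100, 200], [0.1, 0.5])]
    jobs = [(params, seed) for params in grid for seed in range(4)]

    with Pool() as pool:                      # one worker per CPU core by default
        results = pool.map(run_aco, jobs)

    best_params, best_seed, best_length = min(results, key=lambda r: r[2])
    print("best parameters:", best_params, "seed:", best_seed,
          "tour length:", round(best_length, 2))
```

Each job is completely independent of the others, which is exactly what makes these options 'embarrassingly' parallel.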
Option with solution sharing
Option 3. Master-slave approach, with partial solution sharing
Going up one more level in complexity, we can use each node to contribute its 'knowledge/findings' to the overall problem solution. This is sometimes called a master-slave approach. The master is trying to solve the overall problem (it could be the TSP, or some similarly complex problem) and each 'slave' solves some aspect of it with a fairly simple algorithm. The idea is that, combined, they produce powerful results.
After a certain number of iterations, the solutions are passed back and forth, with 'bad' solutions thrown out. Some variant of the Map-Shuffle-Reduce paradigm would do that. The master evaluates the current best solution, and that is transferred back to each 'slave' node (for example, the latest overall pheromone levels are given to all the slave nodes). The next round of solving then resumes.
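Here is a rough sketch of what that exchange might look like. Again, colony_round is only a stand-in for a slave colony running a block of ACO iterations, and "keep the best colony's pheromone matrix" is just one simple merging policy among many:

```python
import random
from multiprocessing import Pool

N_CITIES, N_COLONIES, ROUNDS = 20, 4, 5

def colony_round(args):
    """Stand-in for one slave colony running a block of ACO iterations:
    starts from the shared pheromone matrix and returns its updated matrix
    plus the best tour length it found in this block."""
    pheromone, seed = args
    rng = random.Random(seed)
    updated = [[0.9 * v + 0.1 * rng.random() for v in row] for row in pheromone]
    return updated, rng.uniform(100, 200)     # fake best length for illustration

if __name__ == "__main__":
    shared = [[1.0] * N_CITIES for _ in range(N_CITIES)]   # initial pheromone levels
    overall_best = float("inf")

    with Pool(N_COLONIES) as pool:
        for rnd in range(ROUNDS):
            jobs = [(shared, rnd * N_COLONIES + k) for k in range(N_COLONIES)]
            results = pool.map(colony_round, jobs)          # slaves work in parallel

            # Master step: keep the pheromone matrix of the best colony this round
            # and broadcast it back to every slave for the next round.
            best_matrix, best_length = min(results, key=lambda r: r[1])
            overall_best = min(overall_best, best_length)
            shared = best_matrix

    print("best tour length found:", round(overall_best, 2))
```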
Option 3 has tons of nuanced variations, and some people spend their entire lives improving various aspects of it.
Hope some of these ideas help.
It is said in the [Software Defect Reduction Top 10 List] that 'about 40 to 50 percent of user programs contain nontrivial defects'.
What are some nontrivial defects, and how can they be overcome?
I would interpret "non-trivial" as "has a real impact on the user".
For instance, if a menu item has a typo in it, that would be a trivial defect. If your spreadsheet application crashed when it tried to save any sheet with the number "999" in, that would be non-trivial.
I'd be hugely surprised if the number was really as low as 40-50%. In my experience pretty much every significant application has non-trivial defects, even if they're rarely encountered. (If I'm the only user in the world who uses the number 999 in a spreadsheet, the bug is still hugely important to me so I don't think it can be classed as trivial.)
As for "overcoming" defects - the normal barrage of unit tests, continuous build, automated integration tests, manual testing, making sure you have a really good user feedback system, and management who are willing to put resources into fixing bugs as well as creating new features.
Subjective, but:
Non trivial: defects that stop users doing their job, or that impact their productivity to a significant degree
Trivial: defects that just annoy users
Obviously there is a big grey area here, because what's annoying and trivial for one product might be annoying but non-trivial for another.
First, it is worth noting that most single defects are trivial: tests aim at discovering them.
So non-trivial defects are generally a combination of two or more single defects, each of them harmless alone (the test inputs didn't trigger them).
A second step up in non-triviality is when time is part of the input/output space: specific dates or durations.
Then you can add discrepancies between assumptions and reality: compiler, target platform, inputs, ...
Shake all of that and may the force be with you...
Try to understand the other side first: trivial defects. A trivial defect is either harmless or easy to fix (a typo in some UI text, the wrong color for a button, labels that are not perfectly aligned).
Non-trivial defects are everything else: performance problems, problems with how the application handles, data corruption, etc. They are sometimes hard to find and often hard to fix.