Can we use Cucumber custom formatters to init and clean up data? - cucumber

I'm using Cucumber for testing my application. I have to set up a large data set for a feature and clean it up after the feature is complete. After doing some research on the web, I found out there are before and after hooks for scenarios, but not for features.
Also, I found that cucumber notifies a formatter on its execution life cycle.
So, the question is: can I use a custom formatter and listen to the before_feature and after_feature events to initialize and clean up data? Is it allowed?
Thanks,
mkalakota

No, you cannot use a formatter for this. If you are trying to set up the data, then run many scenarios, then clean up the data, be aware that this makes your scenarios very fragile. Instead, what you should do is set up the data for each scenario and clean it up at the end. You can do this very easily with a Background, e.g.
Feature: Lge data test

  Background:
    Given I have lge data

  Scenario: foo
    ...

  Scenario: bar
You would be better off making the loading of the lge data set fast (use an SQL dump), and only using it when you absolutely have to. Feature hooks are an anti-pattern, which is why Cucumber doesn't support them.
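For illustration, the step backing that Background could load a prepared SQL dump. This is a minimal sketch, assuming Ruby Cucumber with an ActiveRecord connection; the dump path and step wording are invented:

  # features/step_definitions/lge_data_steps.rb
  # Sketch: load a prepared SQL dump so each scenario starts from the same
  # known data set. Assumes an ActiveRecord connection and a dump file at
  # db/lge_data.sql (both hypothetical). Cleanup stays in an After hook or
  # a transaction, like any other per-scenario data.
  Given(/^I have lge data$/) do
    sql = File.read('db/lge_data.sql')
    sql.split(';').map(&:strip).reject(&:empty?).each do |statement|
      ActiveRecord::Base.connection.execute(statement)
    end
  end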

Related

Isolating scenarios in Cabbage

I am automating acceptance tests defined in a specification written in Gherkin using Elixir. One way to do this is an ExUnit addon called Cabbage.
Now ExUnit seems to provide a setup hook which runs before any single test and a setup_all hook, which runs before the whole suite.
Now when I try to isolate my Gherkin scenarios by resetting the persistence within the setup hook, it seems that the persistence is purged before each step definition is executed. But one scenario in Gherkin almost always needs multiple steps which build up the test environment and execute the test in a fixed order.
The other option, the setup_all hook, on the other hand, resets the persistence once per feature file. But a feature file in Gherkin almost always includes multiple scenarios, which should ideally be fully isolated from each other.
So the aforementioned hooks seem to allow me to isolate single steps (which I consider pointless) and whole feature files (which is far from optimal).
Is there any way to isolate each scenario instead?
First of all, there are alternatives, for example: whitebread.
If all your features need some similar initial step, maybe background steps are something to look into. Sadly those changes were mixed into a much larger rewrite of the library that never got merged in. There is another PR which is also mixed in with other functionality and is currently waiting on a companion library update. So, for now, that doesn't work.
I haven't tested how the library behaves with setup hooks, but setup_all should work fine.
There are also tags, which I don't think have been published in a release yet, but they are in master. They work with a callback tag; you can take a closer look at the example in the tests.
There is currently a little bit of a mess. I don't have as much time for this library as I would like.
Hope this helps you a little bit :)

How to use a data-driven framework in Cucumber to access external files such as Excel or a database

I'd like to perform my tests using Cucumber + Excel to store my data. I don't want to keep my data stored in the procedure files. Is there a way to do this?
Yes, this is possible.
What you need to do is to implement reading the data in your step implementations.
If you are using the data from Excel to set up the system under test, then read it in the steps that prepare the system.
If you are using steps to verify the outcome, then read the Excel files in your Then steps.
There is at least one possible issue with doing it like this: it may not be easy to validate your scenarios by reading the feature file, since the scenarios depend on data that may be hard to read at the same time. So while it may seem like a great idea to combine Cucumber and Excel, it may not be so great.
Cucumber is a tool for automating BDD. At the core of BDD is communication between developers, testers, and business people. The feature files support that communication by describing examples that are easy to understand and agree upon. These examples might be obfuscated using the Cucumber + Excel approach.
This is a route I personally would avoid.
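If you do go down this path anyway, reading the spreadsheet from a step definition might look roughly like this. A minimal sketch, assuming Ruby Cucumber and the roo gem; the file layout, step wording, and create_user helper are invented:

  # features/step_definitions/excel_steps.rb
  require 'roo'  # spreadsheet-reading gem, assumed to be in the Gemfile

  Given(/^the users from "(.*)" exist$/) do |file|
    sheet = Roo::Excelx.new(file)
    # Assumes row 1 is a header row: name | email
    sheet.each_row_streaming(offset: 1) do |row|
      name, email = row.map(&:value)
      create_user(name: name, email: email)  # hypothetical setup helper
    end
  end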

How to write feature files, and when to convert them to step definitions, to adapt to changing business requirements?

I am working on a BDD web development and testing project with other team members.
On top, we write feature files in Gherkin and run Cucumber to generate step definitions. At the bottom, we write Selenium page models and action library scripts. The rest is just filling in the step definitions with Selenium script and finally running the Cucumber cases.
Sounds simple enough.
The problem comes starting when we write feature files.
Problem 1: Our client's requirements keep changing every week as the project proceeds, in terms of removing old ones and adding new ones.
Problem 2: On top of that, for some features, detailed steps keep changing too.
The problem gets really bad if we try to regenerate the step definitions from the updated feature files every day. There is quite a bit of housekeeping to do to keep step definitions and feature files in sync.
To deal with problem 2, I remembered that one basic rule in writing Gherkin feature files is to use business-domain language as much as possible. So I tried to persuade the BA to write the feature files a little more abstractly and not include too many UI-specific steps, so that we don't need to modify the feature files/step definitions as often. But she hesitates, because the client's requirement document includes the details and she just tries to follow it.
To deal with problem 1, I have no solution.
So my question is:
Is there a good way to write feature files so that they are less impacted by the client's requirement changes? Can we write them vaguely, omitting some details that may change (that way at least we can stabilize the step definition prototypes), and if so, how far can we go?
When is a good time to generate the step definitions and fill in their content? From the beginning, or should we wait until the features stabilize a little? How often should we do it if the features keep changing? And is there a convenient way to clean up outdated step definitions?
Any thoughts are appreciated.
Thanks,
If your client has specific UI requirements for which you are contracted to provide automated tests, then you ought to be writing those using actual test automation tools. Cucumber is not a test automation tool. If you attempt to use it as such, you are simply causing yourself a lot of pain for naught.
If, however, you are only contracted to validate that your application complies with the business rules provided by your client, during frequent and focused discovery sessions with them, then Cucumber may be able to help you.
In either case, you are ultimately going to fail if there's no real collaboration with your client. If they're regularly throwing new business rules or new business requirements over a transom through which you have limited or no visibility, then you are in a no-win situation.
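On the question's point about keeping UI-specific detail out of the feature files, the usual pattern is to phrase steps in business language and bury the UI detail in the step layer, so that UI churn touches only step definitions. A minimal sketch, assuming Ruby Cucumber with Capybara; the page, field names, and credentials are invented:

  # The feature file only says:
  #   Given I am signed in as a project manager
  # UI changes (renamed fields, a new login flow) touch only this step definition.
  Given(/^I am signed in as a project manager$/) do
    visit '/login'                                  # Capybara helpers
    fill_in 'Email',    with: 'pm@example.com'
    fill_in 'Password', with: 'secret'
    click_button 'Sign in'
  end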

How to customize the fork process?

I would like to execute extra actions after a successful fork
(e.g. automate the creation of a CruiseControl.net project).
Which code should I modify? Where should I start from?
There are many options to implement that, according to the source code.
The first option would be to modify app_services/projects/fork_service.rb.
Another option could be to implement a project_service (e.g. a model and a worker), bound to an external (not-yet-existing) API that would manage the complexity of the project creation.
(More links when my reputation is high enough ;-))
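A rough sketch of the first option, assuming the fork service exposes an execute method that returns the newly forked project (an ActiveRecord model); the module, hook logic, and CruiseControlNotifier collaborator are all invented for illustration:

  # Sketch: wrap the existing service with a module prepend so the extra
  # action runs only after a successful fork.
  module ForkServiceCruiseControlHook
    def execute(*args)
      forked_project = super  # run the original fork logic first
      if forked_project && forked_project.persisted?
        # e.g. enqueue a worker that creates the CruiseControl.net project
        CruiseControlNotifier.create_project(forked_project)  # invented collaborator
      end
      forked_project
    end
  end

  Projects::ForkService.prepend(ForkServiceCruiseControlHook)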

In vows, is there a `beforeEach` / `setup` feature?

Vows has an undocumented teardown feature, but I cannot see any way to set up stuff before each test (a.k.a. beforeEach).
One would think it would be possible to cheat and use the topic, but a topic is only run once (like teardown), whereas I would like this to be run before each test. Can this not be done in vows?
You can create a topic that does the setup, and the tests come after that. If you want it to run multiple times, create a function and have multiple topics that call that function.
It is a bit convoluted because it is not explicit. You should definitely consider Mocha, not only because it is actively maintained, but because it makes tests easier to read than what you end up with when using Vows.
