Isolating scenarios in Cabbage - cucumber

I am automating acceptance tests defined in a specification written in Gherkin using Elixir. One way to do this is an ExUnit addon called Cabbage.
ExUnit seems to provide a setup hook, which runs before each individual test, and a setup_all hook, which runs once before the whole suite.
Now when I try to isolate my Gherkin scenarios by resetting the persistence within the setup hook, it seems that the persistence is purged before each step definition is executed. But one scenario in Gherkin almost always needs multiple steps which build up the test environment and execute the test in a fixed order.
The setup_all hook, on the other hand, resets the persistence once per feature file. But a feature file in Gherkin almost always includes multiple scenarios, which should ideally be fully isolated from each other.
So the aforementioned hooks seem to allow me to isolate single steps (which I consider pointless) and whole feature files (which is far from optimal).
Is there any way to isolate each scenario instead?

First of all, there are alternatives, for example: whitebread.
If all your features need some similar initial step, background steps might be something to look into. Sadly, that change was mixed into a much larger rewrite of the library that never got merged. There is another PR which is also mixed in with other functionality and is currently waiting on a companion library update. So for now that doesn't work.
I haven't tested how the library behaves with setup hooks, but setup_all should work fine.
There is also support for tags, which I think hasn't been published in a release yet but is in master. Tags work together with callback tags; you can take a closer look at the example in the tests.
Things are currently a little bit of a mess; I don't have as much time for this library as I would like.
Hope this helps you a little bit :)

Related

Preventing JHipster import-jdl from overwriting changes when updating entities

With my current workflow I have to check in git and manually review changes, like added functions, after every import-jdl that touches an entity I changed.
Is there a way to add functions to the classes JHipster creates without actually changing the files? Something like code generation with annotations, or extending JHipster-created classes? I feel like I am missing some important documentation from JHipster; I would be grateful for pointers in the right direction.
Thanks!
I faced this problem in one of my projects and I'm afraid there is no easy way to tell JHipster not to overwrite your changes.
The good news is you have two ways of mitigating this and both will make your life much easier.
Update your entities in a separate branch
The idea is to update your entities (execute the import-jdl command) in a different branch and then, once the whole process is finished, merge the changes back to master.
This requires no extra changes to your code. The problem I had with this approach is that sometimes the merges were not trivial and I still had to go through a lot of code just to be sure that everything was still in place and working properly.
Do not change the generated code
This is known as the side-by-side practice. The general idea is that you never change the generated code directly, instead you put your custom code in new files and extend the original ones whenever possible.
This way you can update your entities and JHipster will never remove or modify your custom code.
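To make the side-by-side idea concrete, here is a minimal TypeScript sketch; the file layout and class names are hypothetical, not taken from a real JHipster project:

// generated/user.service.ts - produced by the generator, never edited by hand
export class UserService {
  find(id: number) { /* generated implementation */ }
}

// custom/extended-user.service.ts - hand-written file that survives regeneration
import { UserService } from '../generated/user.service';

export class ExtendedUserService extends UserService {
  // custom behaviour lives only in hand-written files
  findWithAuditLog(id: number) {
    console.log(`looking up user ${id}`);
    return this.find(id);
  }
}

Re-running import-jdl would regenerate user.service.ts, while the hand-written extended-user.service.ts and its custom method are left alone.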
There are two videos available that will teach you (with examples) how to manage this:
Custom and Generated Code Side by Side by Antonio Goncalves
JHipster side-by-side in practice by David Steiman
In my opinion this is the best approach.
I know this probably isn't the answer you were looking for, but to my knowledge there's no better way.

How to avoid code redundancy in large amounts of Node.JS BDD tests

For the last few months, I was working on the backend (REST API) of a quite big project that we started from scratch. We were following BDD (behavior-driven-development) standards, so now we have a large amount of tests (~1000). The tests were written using chai - a BDD framework for Node.JS, but I think that this question can be expanded to general good practices when writing tests.
At first, we tried to avoid code redundancy as much as possible, and it went quite well. As the number of lines of code and people working on the project grew, it became more and more chaotic, but still readable. Sometimes minor changes in the code that could be applied in 15 minutes required changing e.g. mock data and methods in 30+ files, which meant 6 hours of changes and running tests (an extreme example).
TL;DR
We now want to refactor these BDD tests. As an example, we have a function like this:
// assumes chai with the chai-http plugin, and `server`/REGISTER_URL imported elsewhere
function RegisterUserAndGetJWTToken(user_data: any, next: (token: string) => void) {
  chai.request(server)
    .post(REGISTER_URL)
    .send(user_data)
    .end((err: any, res: any) => {
      const token = res.body.token; // previously an implicit global
      next(token);
    });
}
This function is used in most of our test files. Does it make sense to create something like a test-suite that would contain this kind of function, or are there better ways to avoid redundancy when writing tests? Then we could use imports like these:
import {RegisterUserAndGetJWTToken} from "./test-suite";
import {user_data} from "./test-mock-data";
Do you have any good practices that you can share?
Are there any npm packages that could be useful (or packages for other programming languages)?
Do you think that this approach also has downsides (like chaos when there would be multiple imports)?
Maybe there is a way to inject or inherit the test-suite for each file, to avoid imports and have it by default in each file?
EDIT: Forgot to mention - I mean integration tests.
Thanks in advance!
Refactoring the current test suite
Your principle should be raising the level of abstraction in the tests themselves. This means that a test should consist of high-level method calls, expressed in domain language. For example:
registerUser('John', 'john@smith.com')
lastEmail = getLastEmailSent()
lastEmail.recipient.should.be 'john@smith.com'
lastEmail.contents.should.contain 'Dear John'
Now in the implementation of those methods, there could be a lot of things happening. In particular, the registerUser function could do a post request (like in your example). The getLastEmailSent could read from a message queue or a fake SMTP server. The thing is you hide the details behind an API.
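A rough TypeScript sketch of those two helpers might look like the following; the endpoints, the fake SMTP inbox, and the response shapes are assumptions for illustration, not part of the original setup:

// automation-layer.ts - domain-level helpers that hide the transport details
import chai from 'chai';
import chaiHttp from 'chai-http';
import { server } from './test-server'; // hypothetical app export
chai.use(chaiHttp);

export async function registerUser(name: string, email: string): Promise<string> {
  // behind the scenes this is still an HTTP POST, but the tests never see that
  const res = await chai.request(server)
    .post('/api/register') // assumed endpoint
    .send({ name, email });
  return res.body.token;
}

export async function getLastEmailSent(): Promise<{ recipient: string; contents: string }> {
  // here the test environment is assumed to expose a fake SMTP inbox over HTTP (e.g. MailDev)
  const res = await chai.request('http://localhost:1080').get('/email'); // assumed inbox API
  const last = res.body[res.body.length - 1];
  return { recipient: last.to[0].address, contents: last.text };
}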
If you follow this principle, you end up creating an Automation Layer - a domain-oriented, programmatic API to your system. When creating this layer, you follow all the good design principles, like DRY.
The benefit is that when a change in the code happens, there will be only one place to change in the test code - the Automation Layer - and not the tests themselves.
I see that what you propose (extracting the RegisterUserAndGetJWTToken and test data) is a good step towards creating an automation layer. I wouldn't worry about the require calls. I don't see any reason for not being explicit about what our test depends on. Maybe at a later stage some of those could be gathered in larger modules (registration, emailing etc.).
Good practices towards a maintainable test suite
Automate at the right level.
Sometimes it's better to go through the UI or the REST API, but often a direct call to a function will be more sensible. For example, if you write a test for calculating taxes on an invoice, going through the whole application for each of the test cases would be overkill. It's much better to leave one end-to-end test to verify that all the pieces act together, and to automate all the specific cases at the lowest possible level. That way you get good coverage as well as speed and robustness of the test suite.
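For instance, a low-level test for that tax example could call the function directly; calculateTax, its module path, and the expected value here are purely hypothetical:

import { expect } from 'chai';
import { calculateTax } from '../src/invoicing'; // hypothetical module

describe('tax calculation', () => {
  it('applies the reduced rate to books', () => {
    const invoice = { items: [{ category: 'book', net: 100 }] };
    // no HTTP requests, no database - just the business rule under test
    expect(calculateTax(invoice)).to.equal(7); // assumed 7% reduced rate
  });
});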
The guiding principle when writing a test is readability.
You can refer to this discussion for a good explanation.
Treat your test helper code / Automation Layer with the same care as you treat your production code.
This means you should refactor it with great care and attention, following all the good design principles.

Cucumber: Best practice for writing cucumber steps that are shared among different feature sets?

I'm new to Cucumber as a testing suite, and I've noticed something as I build out features and write steps. Let's say, as a bad example (since I'm working backwards), I write a bunch of stuff for creating posts that require a User.
I end up writing a bunch of User-based steps (the login process, etc.) in a feature set mainly dedicated to Post features.
Is it best practice to later move steps into the appropriate feature set as tests get more complicated and features get added?
You have two parts to consider here.
Organize the scenarios so they make sense. That is, place them in the proper feature files.
Organize the implementation of the steps so they make sense. That is, implement the steps in the proper source code files.
Your question boils down to "What makes sense in my context?".
It depends on your stakeholders, do they want all user facing scenarios in the same feature file or are they more interested in business facing scenarios that sometimes involve users? Organize the scenarios so your stakeholders are happy.
How should you organize the steps then? It depends on your developers and your ability to share state between step definitions that are implemented in different source code files.
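For what it's worth, in a JavaScript/TypeScript Cucumber setup such a split might look like this; the file names and the World properties (api, currentUser) are only an illustration:

// features/step_definitions/user.steps.ts - shared, user-related steps
import { Given } from '@cucumber/cucumber';

Given('I am logged in as {string}', async function (username: string) {
  // state stored on the World object is visible to steps defined in other files
  this.currentUser = await this.api.login(username);
});

// features/step_definitions/post.steps.ts - post-related steps reuse that state
import { When } from '@cucumber/cucumber';

When('I create a post titled {string}', async function (title: string) {
  await this.api.createPost(this.currentUser, title);
});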
My approach would probably be to start out small and let the suite grow. This would initially not involve sharing state between different classes at runtime. When the suite feels too large to handle, divide it into two parts that are as coherent as you can make them. When this gets too large, repeat the division. You will, hopefully, end up with something that works well in your context.
Remember that your context and your product is unique. It probably deserves a unique solution that your team feel they can maintain.
Understandability, and therefore maintainability, is the most important property I can think of for the executable specification you are building.

How to write feature files, and when to convert them to step definitions, to adapt to changing business requirements?

I am working on a BDD web development and testing project with other team members.
At the top we write feature files in Gherkin and run Cucumber to generate step functions. At the bottom we write Selenium page models and action library scripts. The rest is just filling in the step functions with Selenium script and finally running the Cucumber cases.
Sounds simple enough.
The problem comes starting when we write feature files.
Problem 1: Our client's requirements keep changing every week as the project proceeds, both removing old ones and adding new ones.
Problem 2: On top of that, for some features, detailed steps keep changing too.
The problem gets really bad if we try to generate updated step functions based on the updated feature files every day. There is quite a bit of housekeeping to do to keep step functions and feature files in sync.
To deal with problem 2, I remembered that one basic rule of writing Gherkin feature files is to use business domain language as much as possible. So I tried to persuade the BA to write the feature files a little more vaguely and not include too many UI-specific steps, so that we wouldn't need to modify feature files/step functions as often. But she hesitates, because the client's requirement document includes details and she just tries to follow it.
To deal with problem 1, I have no solution.
So my question is:
Is there a good way to write feature files so that they're less impacted by the client's requirement changes? Can we write them vaguely, omitting some details that may change (this way at least we can stabilize the step function prototypes), and if so, how far can we go?
When is a good time to generate the step definitions and fill in the content? From the beginning, or should we wait until the features stabilize a little? How often should we do it if the features keep changing? And is there a convenient way to clean up the outdated step functions?
Any thoughts are appreciated.
Thanks,
If your client has specific UI requirements for which you are contracted to provide automated tests, then you ought to be writing those using actual test automation tools. Cucumber is not a test automation tool. If you attempt to use it as such, you are simply causing yourself a lot of pain for naught.
If, however, you are only contracted to validate that your application complies with the business rules provided by your client, during frequent and focused discovery sessions with them, then Cucumber may be able to help you.
In either case, you are going to ultimately fail if there's no real collaboration with your client. If they're regularly throwing new business rules or new business requirements over a transom through which you have limited or no visibility, then you are in a no-win situation.

How to iterate over a cucumber feature

I'm writing a feature in Cucumber that could be applied to a number of objects that can be programmatically determined. Specifically, I'm writing a smoke test for a cloud deployment (though the problem is with Cucumber, not the cloud tools, thus Stack Overflow).
Given a node matching "role:foo"
When I connect to "automatic.eucalyptus.public_ipv4" on port "default.foo.port"
Then I should see "Hello"
The Given step does a search for nodes with the role foo, and the automatic.eucalyptus... and port values come from the node found. This works just fine... for one node.
The search could return multiple nodes in different environments. Dev will probably return one, test and integration a couple, and prod can vary. The Given step already finds all of them.
Looping over the nodes in each step doesn't really work. If any one failed in the When, the whole thing would fail. I've looked at scenarios and cucumber-iterate, but both seem to assume that all scenarios are predefined rather than programmatically looked up.
I'm a cuke noob, so I'm probably missing something. Any thoughts?
Edit
I'm "resolving" the problem by flipping the scenario. I'm trying to integrate into a larger cluster definition language to define repeatedly call the feature by passing the info as an environment variable.
I apologize in advance that I can't tell you exactly "how" to do it, but a friend of mine solved a similar problem using a somewhat unorthodox technique. He runs scenarios that write out scenarios to be run later. The gem he wrote to do this is called cukewriter. He describes how to use it in pretty good detail on the github page for the gem. I hope this will work for you, too.
