In vows, is there a `beforeEach` / `setup` feature? - node.js

Vows has an undocumented teardown feature, but I cannot see any way to set things up before each test (a.k.a. beforeEach).
One would think it would be possible to cheat and use the topic, but a topic is only run once (like teardown), whereas I would like this to run before each test. Can this not be done in Vows?

You can create a topic that does the setup, with the tests coming after it. If you want the setup to run multiple times, extract it into a function and have each topic call that function.
It is a bit convoluted because nothing is explicit. You should definitely consider Mocha, not only because it is actively maintained, but because it makes tests easier to read than what you end up with when using Vows.
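For illustration, here is a minimal sketch of that pattern; the setupUser helper and the contexts are made up for this example:

var vows = require('vows'),
    assert = require('assert');

// Shared "beforeEach"-style setup, called from every topic that needs it.
function setupUser() {
  return { name: 'Alice', loggedIn: false };
}

vows.describe('user session').addBatch({
  'when logging in': {
    topic: function () {
      var user = setupUser();   // fresh state for this context
      user.loggedIn = true;
      return user;
    },
    'the user is logged in': function (user) {
      assert.strictEqual(user.loggedIn, true);
    }
  },
  'when doing nothing': {
    topic: function () {
      return setupUser();       // fresh state again for this context
    },
    'the user is still logged out': function (user) {
      assert.strictEqual(user.loggedIn, false);
    }
  }
}).export(module);

Each context repeats the topic boilerplate, which is exactly the lack of explicitness mentioned above.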

Related

Isolating scenarios in Cabbage

I am automating acceptance tests defined in a specification written in Gherkin using Elixir. One way to do this is an ExUnit addon called Cabbage.
Now ExUnit seems to provide a setup hook, which runs before each individual test, and a setup_all hook, which runs once before the whole suite.
Now when I try to isolate my Gherkin scenarios by resetting the persistence within the setup hook, it seems that the persistence is purged before each step definition is executed. But one scenario in Gherkin almost always needs multiple steps which build up the test environment and execute the test in a fixed order.
The other option, the setup_all hook, on the other hand, resets the persistence once per feature file. But a feature file in Gherkin almost always includes multiple scenarios, which should ideally be fully isolated from each other.
So the aforementioned hooks seem to allow me to isolate single steps (which I consider pointless) and whole feature files (which is far from optimal).
Is there any way to isolate each scenario instead?
First of all, there are alternatives, for example: whitebread.
If all your features need some similar initial step, background steps may be something to look into. Sadly, those changes were mixed into a much larger rewrite of the library that never got merged. There is another PR which is also mixed in with other functionality and is currently waiting on a companion library update. So at the moment that doesn't work.
I haven't tested how the library behaves with setup hooks, but setup_all should work fine.
There is also such a thing as tags. I think they haven't been published in a release yet, but they are in master. They work with the callback tag; you can look at the example in the tests for details.
Things are currently a little bit of a mess; I don't have as much time for this library as I would like.
Hope this helps you a little bit :)

How to avoid code redundancy in large amounts of Node.JS BDD tests

For the last few months, I was working on the backend (REST API) of a quite big project that we started from scratch. We were following BDD (behavior-driven development) standards, so now we have a large number of tests (~1000). The tests were written using chai - a BDD-style assertion library for Node.js - but I think this question extends to general good practices for writing tests.
At first, we tried to avoid code redundancy as much as possible, and it went quite well. As the number of lines of code and the number of people working on the project grew, it became more and more chaotic, but still readable. Sometimes a minor code change that could be applied in 15 minutes forced us to change e.g. mock data and methods in 30+ files, which meant 6 hours of changes and re-running tests (an extreme example).
TL;DR
We now want to refactor these BDD tests. As an example, we have a function like this:
function RegisterUserAndGetJWTToken(user_data, next: any) {
  chai.request(server).post(REGISTER_URL).send(user_data).end((err: any, res: any) => {
    const token = res.body.token;
    next(token);
  });
}
This function is used in most of our test files. Does it make sense to create something like a test-suite that would contain this kind of function, or are there better ways to avoid redundancy when writing tests? Then we could use imports like these:
import {RegisterUserAndGetJWTToken} from "./test-suite";
import {user_data} from "./test-mock-data";
Do you have any good practices that you can share?
Are there any npm packages that could be useful (or packages for other programming languages)?
Do you think that this approach also has downsides (like chaos when there would be multiple imports)?
Maybe there is a way to inject or inherit the test-suite for each file, to avoid imports and have it available by default in each file?
EDIT: Forgot to mention - I mean integration tests.
Thanks in advance!
Refactoring current test suite
Your principle should be raising the level of abstraction in the tests themselves. This means that a test should consist of high-level method calls, expressed in domain language. For example:
registerUser('John', 'john@smith.com')
lastEmail = getLastEmailSent()
lastEmail.recipient.should.be 'john@smith.com'
lastEmail.contents.should.contain 'Dear John'
Now in the implementation of those methods, there could be a lot of things happening. In particular, the registerUser function could do a POST request (like in your example). The getLastEmailSent function could read from a message queue or a fake SMTP server. The point is that you hide the details behind an API.
If you follow this principle, you end up creating an Automation Layer - a domain-oriented, programmatic API to your system. When creating this layer, you follow all the good design principles, like DRY.
The benefit is that when a change in the code happens, there will be only one place to change in the test code - the Automation Layer - and not the tests themselves.
I see that what you propose (extracting the RegisterUserAndGetJWTToken and test data) is a good step towards creating an automation layer. I wouldn't worry about the require calls. I don't see any reason for not being explicit about what our test depends on. Maybe at a later stage some of those could be gathered in larger modules (registration, emailing etc.).
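As a rough sketch of what such a module could look like - assuming chai-http and placeholder names like server, REGISTER_URL and the ../app module:

// test-suite.ts - a hypothetical Automation Layer module
import chai from "chai";
import chaiHttp from "chai-http";
import { server } from "../app";        // the app under test (assumed to be exported there)

chai.use(chaiHttp);

const REGISTER_URL = "/api/register";   // placeholder route

// Domain-oriented helper: tests call this instead of building HTTP requests themselves.
export function RegisterUserAndGetJWTToken(user_data: object): Promise<string> {
  return chai
    .request(server)
    .post(REGISTER_URL)
    .send(user_data)
    .then((res) => res.body.token);     // resolve with the JWT token
}

Tests then read as domain-level calls, and only this module knows about URLs, payloads and response shapes.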
Good practices towards a maintainable test suite
Automate at the right level.
Sometimes it's better to go through the UI or REST, but often a direct call to a function will be more sensible. For example, if you write tests for calculating taxes on an invoice, going through the whole application for each of the test cases would be overkill. It's much better to leave one end-to-end test to check that all the pieces act together, and automate all the specific cases at the lowest possible level. That way we get good coverage as well as speed and robustness of the test suite.
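For instance, the tax rules could be automated at the function level like this (calculateTax and the invoicing module are hypothetical):

const assert = require('assert');
const { calculateTax } = require('../src/invoicing'); // hypothetical module under test

describe('calculateTax', function () {
  it('applies the standard rate', function () {
    assert.strictEqual(calculateTax({ net: 100, rate: 0.23 }), 23);
  });

  it('charges no tax on exempt items', function () {
    assert.strictEqual(calculateTax({ net: 100, rate: 0.23, exempt: true }), 0);
  });
});

A single end-to-end test can then cover the full invoice flow, while the rate-specific cases stay fast and direct.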
The guiding principle when writing a test is readability.
You can refer to this discussion for a good explanation.
Treat your test helper code / Automation Layer with the same care as you treat your production code.
This means you should refactor it with great care and attention, following all the good design principles.

can we use cucumber custom formatters to init and clean data?

I'm using Cucumber for testing my application. I have to set up a large amount of data for a feature and clean it up after the feature is complete. After doing some research on the web, I found out that there are hooks only for scenarios, but no before and after hooks for features.
Also, I found that Cucumber notifies a formatter about its execution life cycle.
So, the question is: can I use a custom formatter and listen to the before_feature and after_feature events to init and clean data? Is that allowed?
Thanks,
mkalakota
No, you cannot use a formatter for this. If you are trying to set up the data, then run many scenarios, then clean up the data, be aware that this makes your scenarios very fragile. Instead, what you should do is set up the data for each scenario and clean it up at the end. You can do this very easily with a Background, e.g.
Feature: Large data test

  Background:
    Given I have large data

  Scenario: foo
    ...

  Scenario: bar
You would be better off making the loading of the large data set fast (use a SQL dump), and only using it when you absolutely have to. Feature hooks are an anti-pattern, which is why Cucumber doesn't support them.
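If you happen to be on cucumber-js, scenario-level hooks give you the same per-scenario setup and cleanup; in this sketch, loadLargeDataSet and resetDatabase are hypothetical helpers:

const { Before, After } = require('@cucumber/cucumber');
const { loadLargeDataSet, resetDatabase } = require('./support/db'); // hypothetical helpers

// Runs before every scenario: start from a known state.
Before(async function () {
  await resetDatabase();
  await loadLargeDataSet();   // e.g. restore a SQL dump, which keeps this fast
});

// Runs after every scenario: keep scenarios isolated from each other.
After(async function () {
  await resetDatabase();
});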

Implicitly registering unit tests in Haskell

I'm new to Haskell, with a C++ background. I'm doing some exercises in Haskell, and I want to implement them as a bunch of functions covered with unit tests, so the testing driver is my only app.
And with my background, I'm looking for something like GTest. HUnit is its analog in the Haskell world, but the need to explicitly register tests is really annoying - it's tedious and violates the DRY principle.
So I was thinking about experimenting with a custom testing framework. It seems that Template Haskell can be used to automate providing assertion descriptions and registering tests within one module. But how can I automatically collect all tests from all linked modules?
Of course, it is always possible to write build script that would grep sources and generate required code, but I wonder, if this can be done in Haskell only?
test-framework-th provides this functionality for test-framework. The simplest thing is to use the defaultMainGenerator function to collect all top-level definitions prefixed with case_ (HUnit) or prop_ (QuickCheck) into test groups.
If you have multiple test groups, you do still need to list them in a main entry point for your tests. There’s probably a way around that, and I guess that’s what you’re really asking about, but honestly I have found little need to break tests into more than a handful of modules. The effort needed to avoid repetition is sometimes less than the effort needed to maintain it.

Should I switch from Vows to Mocha?

I'm trying to decide whether to switch from Vows to Mocha for a large Node app.
I've enjoyed almost all of the Vows experience - but there is just something strange about the argument passing. I always have to scratch my head to remember how topics work, and that interferes with the basics of getting the tests written. It is particularly problematic on deeply nested asynchronous tests. Though I find that combining Vows with async.js can help a little.
So Mocha seems more flexible in its reporting. I like the freedom to choose the testing style & importantly it runs in the browser too, which will be very useful. But I'm worried that it still doesn't solve the readability problem for deeply nested asynchronous tests.
Does anyone have any practical advice - can Mocha make deeply nested tests readable? Am I missing something?
Mocha is ace. It provides a done callback, rather than the waitsFor that Jasmine provides. I can't speak about migration from Vows, but from Jasmine it was straightforward. Inside your Mocha test function you can use async if you want (or Seq etc. if you want to be legacy), though if you require nested callbacks at that point it's an integration test, which might make you think about the granularity of your tests.
OT: 100% test coverage rarely delivers any value.
Deeply nested tests are solved by using flow control in your unit test suite.
Vows does not allow this easily, because its exports style requires flow control libraries written specifically to support it.
Either write a flow control library for Vows, or switch to Mocha and re-use an existing flow control library.
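For example, with Mocha's done callback and an existing flow control library like async, a multi-step asynchronous test stays flat; createUser and fetchProfile are hypothetical callback-style helpers:

const async = require('async');
const assert = require('assert');
const { createUser, fetchProfile } = require('./helpers'); // hypothetical callback-style helpers

describe('user profile', function () {
  it('creates a user and then fetches the profile', function (done) {
    async.waterfall([
      (cb) => createUser({ name: 'Alice' }, cb),   // step 1: create the user
      (user, cb) => fetchProfile(user.id, cb),     // step 2: fetch, using step 1's result
    ], (err, profile) => {
      if (err) return done(err);
      assert.strictEqual(profile.name, 'Alice');
      done();                                       // tell Mocha the async test is finished
    });
  });
});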
