What is the role of the suite() function in Mocha?

I'm reading the book Web Development with Node.js and Express, and it uses the function suite():
var assert = require('chai').assert;
suite('tests', function () {
  // set of tests
});
I don't understand where it comes from, and I can't find any documentation about this function.
It seems to look and behave just like Mocha's describe() function.

Mocha supports several different ways of writing tests (interfaces) so that you can choose a style that suits your methodology. describe() and suite() essentially do the same thing: they let you label and group together a set of tests; the grouped tests are organised under a common label in the output and can use common setup and teardown functions.
The choice of which function to use depends on whether you are using a Behaviour Driven Development (BDD) methodology (where you describe() the behaviour you want it() to do), or Test Driven Development (TDD), where you define a suite() of test()s you want your code to pass. You should choose whichever style you feel makes your code more readable.
Here's a blog post explaining the difference between TDD and BDD with regard to test design.

Documentation can be found on the mocha website: https://mochajs.org/#tdd
suite is the TDD version of describe. You generally use it to describe and isolate the functionality/features/behaviour that you are going to test.
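For example, here is a minimal sketch of the question's snippet as a runnable TDD-interface test; the TDD names become available when Mocha is started with --ui tdd:

var assert = require('chai').assert;

// With Mocha's TDD interface (run with: mocha --ui tdd),
// suite()/test() replace the default BDD names describe()/it().
suite('array', function () {
  test('returns -1 when the value is not present', function () {
    assert.equal([1, 2, 3].indexOf(4), -1);
  });
});

// The equivalent test in the default BDD interface:
// describe('array', function () {
//   it('returns -1 when the value is not present', function () {
//     assert.equal([1, 2, 3].indexOf(4), -1);
//   });
// });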

Related

How to avoid code redundancy in large amounts of Node.JS BDD tests

For the last few months, I have been working on the backend (REST API) of a quite big project that we started from scratch. We were following BDD (behavior-driven development) standards, so now we have a large number of tests (~1000). The tests were written using chai, a BDD assertion library for Node.JS, but I think this question can be expanded to general good practices when writing tests.
At first, we tried to avoid code redundancy as much as possible, and it went quite well. As the number of lines of code and people working on the project grew, it became more and more chaotic, but still readable. Sometimes minor changes in the code that could be applied in 15 minutes caused the need to change e.g. mock data and methods in 30+ files, which meant 6 hours of changes and running tests (an extreme example).
TL;DR
We now want to refactor these BDD tests. As an example, we have a function like this:
function RegisterUserAndGetJWTToken(user_data: any, next: any) {
  chai.request(server).post(REGISTER_URL).send(user_data).end((err: any, res: any) => {
    const token = res.body.token;
    next(token);
  });
}
This function is used in most of our test files. Does it make sense to create something like a test-suite that would contain these kinds of functions, or are there better ways to avoid redundancy when writing tests? Then we could use imports like these:
import {RegisterUserAndGetJWTToken} from "./test-suite";
import {user_data} from "./test-mock-data";
Do you have any good practices that you can share?
Are there any npm packages that could be useful (or packages for other programming languages)?
Do you think that this approach also has downsides (like chaos when there would be multiple imports)?
Maybe there is a way to inject or inherit the test-suite for each file, to avoid imports and have it by default in each file?
EDIT: Forgot to mention - I mean integration tests.
Thanks in advance!
Refactoring current test suite
Your guiding principle should be to raise the level of abstraction in the tests themselves. This means that a test should consist of high-level method calls, expressed in domain language. For example:
registerUser('John', 'john@smith.com')
lastEmail = getLastEmailSent()
lastEmail.recipient.should.be 'john@smith.com'
lastEmail.contents.should.contain 'Dear John'
Now in the implementation of those methods, there could be a lot of things happening. In particular, the registerUser function could do a post request (like in your example). The getLastEmailSent could read from a message queue or a fake SMTP server. The thing is you hide the details behind an API.
If you follow this principle, you end up creating an Automation Layer - a domain-oriented, programmatic API to your system. When creating this layer, you follow all the good design principles, like DRY.
The benefit is that when a change in the code happens, there will be only one place to change in the test code: the Automation Layer, not the tests themselves.
I see that what you propose (extracting the RegisterUserAndGetJWTToken and test data) is a good step towards creating an automation layer. I wouldn't worry about the require calls. I don't see any reason for not being explicit about what our test depends on. Maybe at a later stage some of those could be gathered in larger modules (registration, emailing etc.).
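As a concrete illustration, here is a minimal sketch of what such an automation-layer module might look like, reworking the RegisterUserAndGetJWTToken helper from the question into a promise-based, domain-oriented API (the chai-http plugin and the server/REGISTER_URL imports are assumptions about your project layout):

// test-suite.ts - a hypothetical automation-layer module
import chai from 'chai';
import chaiHttp from 'chai-http';
import { server } from './server';        // assumption: your Express app
import { REGISTER_URL } from './config';  // assumption: endpoint constant

chai.use(chaiHttp);

// Domain-oriented helper: the test states *what* happens (a user registers),
// while the HTTP details stay hidden behind this API.
export function registerUserAndGetJWTToken(user_data: any): Promise<string> {
  return new Promise((resolve, reject) => {
    chai.request(server)
      .post(REGISTER_URL)
      .send(user_data)
      .end((err: any, res: any) => {
        if (err) return reject(err);
        resolve(res.body.token); // hand the JWT back to the test
      });
  });
}

A test then only needs const token = await registerUserAndGetJWTToken(user_data); when the registration endpoint changes, this module is the single place to update.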
Good practices towards a maintainable test suite
Automate at the right level.
Sometimes it's better to go through the UI or REST, but often a direct call to a function will be more sensible. For example, if you write a test for calculating taxes on an invoice, going through the whole application for each of the test cases would be overkill. It's much better to have one end-to-end test that checks all the pieces act together, and to automate all the specific cases at the lowest possible level. That way we get good coverage as well as speed and robustness in the test suite.
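As an illustration, here is a minimal sketch of automating at the lowest sensible level; taxFor is a hypothetical domain function, not something from the question:

import { strict as assert } from 'assert';

// Hypothetical domain function under test; exercised directly
// rather than driving the whole app through REST for every case.
function taxFor(invoice: { net: number; rate: number }): number {
  return invoice.net * invoice.rate;
}

describe('invoice tax calculation', function () {
  it('applies the standard rate', function () {
    assert.equal(taxFor({ net: 100, rate: 0.25 }), 25);
  });

  it('charges nothing on a zero invoice', function () {
    assert.equal(taxFor({ net: 0, rate: 0.25 }), 0);
  });
});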
The guiding principle when writing a test is readability.
You can refer to this discussion for a good explanation.
Treat your test helper code / Automation Layer with the same care as you treat your production code.
This means you should refactor it with great care and attention, following all the good design principles.

What is the value add of BDD?

I am now working on a project where we are using cucumber-jvm to drive acceptance tests.
On previous projects I would create internal DSLs in groovy or scala to drive acceptance tests. These DSLs would be fairly simple to use such that even a non-techie would be able to write tests with a little bit of guidance.
What I see is that BDD adds another layer of indirection and semantic sugar to the tests, but I fail to see the value-add, especially if the non-techies can use an internal DSL.
In the case of Cucumber, stepDefs seem to scatter the code that drives any given test over several different classes, making the test code difficult to read and debug outside the feature file. On the other hand, putting all the code pertaining to one test in a single stepDef class discourages re-use of stepDefs. Both outcomes are undesirable, leaving me asking whether the use of natural language is worth all this extra, unintuitive indirection.
Is there something I am missing? Like a subtle philosophical difference between ATDD and BDD? Does the former imply imperative testing whereas the latter implies declarative testing? Do these aesthetic differences have intrinsic value?
So I am left asking what value-add justifies the deterioration in the readability of the actual code that drives the test. Is this BDD stuff actually worth the pain? Is the value-add more than just aesthetic?
I would be grateful if someone out there could come up with a compelling argument as to why the gain of BDD surpasses the pain of BDD.
What I see is that BDD adds another layer of indirection and semantic sugar to the tests, but I fail to see the value-add, especially if the non-techies can use an internal DSL.
The extra layer is the plain-language .feature file, and at the point of creation it has nothing to do with testing; it has to do with creating the requirements of the system using a technique called specification by example, producing well-defined stories. When written properly in the business language, specifications by example are very powerful at creating a shared understanding. This exercise alone can both reduce the amount of rework and find defects before development starts. This exercise is otherwise known as deliberate discovery.
Once you have a shared understanding and agreement on the specifications, you enter development and make those specifications executable. Here is where you would use ATDD. So BDD and ATDD are not comparable; they are complementary. As part of ATDD, you drive the development of the system using the behaviour that has been defined by way of example in the story. The nice thing you have as a developer is a formal format that contains preconditions, events, and postconditions that you can automate.
From here on, the automated running of the executable specifications on a CI system will reduce regression and provide you with all the benefits you get from any other automated testing technique.
The really interesting thing is that the executable specification files are long-lived and evolve over time as you add or change behaviour in your system. Unlike most Agile methodologies, where user stories are thrown away after they have been developed, here you have living documentation of your system that is also the specification and the automated test suite.
Let's now run through a healthy BDD-enabled delivery process (this is not the only way, but it is the way we like to work):
Deliberate Discovery session.
Output = agreed specifications delta
ATDD to drive development
Output = actualizing code, automated tests
Continuous Integration
Output = a report with screenshots that serves as browsable documentation of the system
Automated Deployment
Output = working software being consumed
Measure & Learn
Output = new ideas and feedback to feed the next deliberate discovery session
So BDD can really help you with the missing piece of most delivery processes: the specification part. This is typically undisciplined and freeform, and is left up to a few individuals to hold together. This is how BDD is an Agile methodology and not just a testing technique.
With that in mind, let me address some of your other questions.
In the case of Cucumber, stepDefs seem to scatter the code that drives any given test over several different classes, making the test code difficult to read and debug outside the feature file. On the other hand, putting all the code pertaining to one test in a single stepDef class discourages re-use of stepDefs. Both outcomes are undesirable, leaving me asking whether the use of natural language is worth all this extra, unintuitive indirection.
If you make the stepDefs a super-thin layer on top of your test automation codebase, then it's easy to reuse the automation code from multiple steps. In the test codebase, you should apply techniques and principles such as the testing pyramid and shallow depth of test to ensure you have a robust and fast test automation layer. What's also interesting about this separation is that it allows you to reuse the code between your stepDefs and your unit/integration tests.
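As a hedged sketch using the JavaScript Cucumber implementation, @cucumber/cucumber (the scenario and the automation-layer helpers are hypothetical), a super-thin step-definition layer might look like this:

// features/registration.feature (Gherkin, reproduced here as a comment):
//   Scenario: New user receives a welcome email
//     Given no account exists for "john@smith.com"
//     When "John" registers with "john@smith.com"
//     Then the last email sent should contain "Dear John"

import { Given, When, Then } from '@cucumber/cucumber';
import { strict as assert } from 'assert';
// Hypothetical automation-layer helpers; each step stays a one-liner.
import { deleteAccount, registerUser, getLastEmailSent } from './automation';

Given('no account exists for {string}', async function (email: string) {
  await deleteAccount(email);
});

When('{string} registers with {string}', async function (name: string, email: string) {
  await registerUser(name, email);
});

Then('the last email sent should contain {string}', async function (text: string) {
  const lastEmail = await getLastEmailSent();
  assert.ok(lastEmail.contents.includes(text));
});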
Is there something I am missing? Like a subtle philosophical difference between ATDD and BDD? Does the former imply imperative testing whereas the latter implies declarative testing? Do these aesthetic differences have intrinsic value?
As mentioned above, ATDD and BDD are complementary, not comparable. On the point of imperative/declarative: specification by example as a technique is very specific. When you are performing the deliberate discovery phase, you always ask the question "can you give me an example?". In that example, you would use exact values. If there are two values that can be used in the precondition (Given) or event (When) steps, and they have different outcomes (Then step), it means you have two different scenarios. If they have the same outcome, it's likely the same scenario. Therefore, as part of the BDD practice, the steps need to be declarative in order to gain the benefits of deliberate discovery.
So I am left asking what value-add justifies the deterioration in the readability of the actual code that drives the test. Is this BDD stuff actually worth the pain? Is the value-add more than just aesthetic?
It's worth it if you are working in a team where you want to solve the problem of miscommunication. One of the reasons people fail with BDD is that the writing and automation of features is left to the developers and the QAs, and the artifacts are no longer coherent as living specifications; they are just test scripts.
Test scripts tell you how a system does a particular thing, but they do not tell you why.
I would be grateful if someone out there could come up with a compelling argument as to why the gain of BDD surpasses the pain of BDD.
It's about using the right tool for the job. Using Cucumber for writing unit tests or automated test scripts is like using a hammer to drive a screw into wood. It might work, but it's never pretty and it's always painful!
On the subject of tools, your typical business analyst / product owner is not going to have the knowledge needed to peek into your source control and work with you on adding or modifying specs. We created a commercial tool to fix this problem by allowing your whole team to collaborate over specifications in the cloud while staying in sync (in real time) with your repository. Check out Simian.
I have also answered a question about BDD here that may be of interest to you that focuses more on development:
Should TDD and BDD be used in conjunction?
Cucumber and Selenium are two popular technologies. Many organizations use Selenium for functional testing, and those that do often want to integrate Cucumber with Selenium, as Cucumber makes the application flow easy to read and understand. Cucumber is based on the Behavior Driven Development framework and acts as a bridge between the following people:
Software Engineer and Business Analyst. 
Manual Tester and Automation Tester. 
Manual Tester and Developers. 
Cucumber also helps the client understand the application, as it uses the Gherkin language, which is plain text: anyone in the organization can understand the behavior of the software. Gherkin's syntax is simple, readable, and understandable text.

Creating Test suites in Spring IDE for the Spock Test specs

I have hundreds of test specifications written in Spock. All of these are functional tests and can be run independently. But I have come across a situation where I need to run a specific test before running some other test.
This was very easy to achieve using a JUnit Test Suite, and it was very straightforward in Eclipse. But since all my tests are Groovy tests, there is no easy way to create a Test Suite in Spring IDE for the Spock tests (written in Groovy).
Can someone please share some ideas as to how we can create a test suite, run some specific tests, and define the order of those tests?
Any help would be much appreciated.
Spock specifications are valid JUnit tests (or suites) as well. That's why they are recognized by tools such as STS. You should be able to add them to a test suite just like any other JUnit test.
On the other hand, it doesn't sound like good practice if your tests depend on execution order.
If certain tasks need to be performed before the test execution, they should be placed in the setup() method. If that logic is common to more than one test, consider extracting it to a parent class.
If all you need is sequential execution of methods within a spec, have a look at @spock.lang.Stepwise, which is handy for testing workflows. Otherwise, you have the same possibilities as with plain JUnit: you can use JUnit (4) test suites, model test suites in your build tool of choice (which might not help within STS), or define test suites via Eclipse run configurations. I don't know how far support for the latter goes, but at the very least, it should allow you to run all tests in a package.
Although I think it won't allow you to specify the order of the tests, you could use Spock's Runner configuration or the @IgnoreIf/@Requires built-in extensions. Have a look at my response to a similar question. It's probably also worth having a look at the RunnerConfiguration javadoc, as it shows that you can include classes directly instead of using annotations.
If the tests you want to run in a specific order are part of the same Spock Specification, then you can use the @Stepwise annotation to direct that the tests (feature methods) be executed in the order they appear in the Specification class.
As others mentioned, it's best to avoid this dependency if you can because of the complexity it introduces. For example, what happens if the first test fails? Does that leave the system in an undefined state for the subsequent tests? So it would be better to remove the inter-test dependencies with setup() and cleanup() methods (or setupSpec() and cleanupSpec()).
Another option is to combine two dependent tests into a single multi-stage test with multiple when:/then: block pairs in sequence.

Should I switch from Vows to Mocha?

I'm trying to decide whether to switch from Vows to Mocha for a large Node app.
I've enjoyed almost all of the Vows experience, but there is just something strange about the argument passing. I always have to scratch my head to remember how topics work, and that interferes with the basics of getting the tests written. It is particularly problematic for deeply nested asynchronous tests, though I find that combining Vows with async.js can help a little.
Mocha, meanwhile, seems more flexible in its reporting. I like the freedom to choose the testing style, and, importantly, it runs in the browser too, which will be very useful. But I'm worried that it still doesn't solve the readability problem for deeply nested asynchronous tests.
Does anyone have any practical advice - can Mocha make deeply nested tests readable? Am I missing something?
Mocha is ace. It provides a done callback, rather than the waitsFor that Jasmine provides. I can't speak to migration from Vows, but from Jasmine it was straightforward. Inside your Mocha test function you can use async if you want (or Seq etc. if you want to be legacy), though if you require nested callbacks at that point it's an integration test, which might make you think about the granularity of your tests.
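To illustrate, here is a minimal sketch of Mocha's done callback next to the promise style that flattens nesting; loadUser is a hypothetical async API, stubbed here so the example is self-contained:

import { strict as assert } from 'assert';

// Hypothetical async API, stubbed so the example runs on its own.
function loadUser(name: string, cb: (err: Error | null, user?: { name: string }) => void) {
  setImmediate(() => cb(null, { name }));
}

function loadUserAsync(name: string): Promise<{ name: string }> {
  return new Promise((resolve, reject) =>
    loadUser(name, (err, user) => (err ? reject(err) : resolve(user!))));
}

describe('user service', function () {
  // Mocha hands the test a `done` callback for callback-style async code.
  it('loads a user (callback style)', function (done) {
    loadUser('alice', (err, user) => {
      if (err) return done(err);
      assert.equal(user!.name, 'alice');
      done();
    });
  });

  // Returning a promise (async/await) flattens the nesting entirely.
  it('loads a user (promise style)', async function () {
    const user = await loadUserAsync('alice');
    assert.equal(user.name, 'alice');
  });
});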
OT: 100% test coverage rarely delivers any value.
Deeply nested tests are solved by using flow control in your unit test suite.
Vows does not make this easy, because its exports style requires flow-control libraries written specifically to support it.
Either write a flow-control library for Vows, or switch to Mocha and reuse an existing flow-control library.

In vows, is there a `beforeEach` / `setup` feature?

Vows has an undocumented teardown feature, but I cannot see any way to set things up before each test (a.k.a. beforeEach).
One would think it would be possible to cheat and use the topic, but a topic is only run once (like teardown), whereas I would like this to run before each test. Can this not be done in Vows?
You can create a topic that does the setup, with the tests coming after it. If you want the setup to run multiple times, create a function and have multiple topics call that function.
It is a bit convoluted because it is not explicit. You should definitely consider Mocha, not only because it is actively maintained but because it makes tests easier to read than what you end up with when using Vows.
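For comparison, a minimal sketch of Mocha's built-in per-test setup (the cart example here is hypothetical):

import { strict as assert } from 'assert';

describe('shopping cart', function () {
  let cart: string[];

  // Runs before *each* test, unlike a vows topic, which runs only once.
  beforeEach(function () {
    cart = [];
  });

  it('starts empty', function () {
    assert.equal(cart.length, 0);
  });

  it('accepts items', function () {
    cart.push('apple');
    assert.equal(cart.length, 1);
  });
});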
