Closed. This question is opinion-based. It is not currently accepting answers. Closed 7 years ago.
I'm a complete beginner at unit testing in Node.js, and I want to know the best practices for writing unit tests. For example, with the it() method, how many assertions can a single test case have? Is there a standard of writing only one assertion per it() method? Please give me an idea of how to write unit test cases.
Thanks in advance. :)
Test one part of the functionality per it() call, and only use multiple assertions if really needed.
If you use two assertions in one it() call, a failure of the first will block the second from being executed, hiding part of your tests and preventing you from getting a full view of the possible error.
Study how to use before/after and beforeEach/afterEach inside a describe block - those will really help you to only perform tests on small parts of your code in every it(). See the 'Hooks' chapter in the mocha documentation.
Optionally, create your own set of helper functions to set up your code for a single test, to prevent (too much) code duplication in your tests - I believe code duplication in tests is just as bad as code duplication in your 'real' code.
This free tutorial explains Chai and Mocha quite well, and how to structure your tests.
While Mocha is a regular test framework, Chai is an expectation (assertion) library. The key difference lies in the syntactic sugar with which tests are formulated (the use of it() for test cases), which I personally find confusing, too.
For a start, you should probably stick with Mocha. It might help to get some wording straight:
Mocha is a test framework (it gives you a defined outer set of functionality in which to fill in the gaps, i.e. place your tests), whereas
Unit.js is a test library, so it offers a bunch of functions (like all kinds of asserts), but you drive the script yourself (no test suites, no test running).
The mocha.js framework uses the unit.js test functions (see here).
Is it possible, in Spock's cleanup method, to check whether the feature (or even better, the current iteration of the feature) passed or failed? In Java's JUnit/TestNG/Cucumber it can be done in one line. But what about Spock?
I've found similar questions here:
Find the outcome/status of a test in Specification.cleanup()
Execute some action when Spock test fails
But both seem overcomplicated, and those answers are years old. Is there any better solution?
Thanks in advance
Update: the main goal is to save screenshots and perform some additional actions for failed tests only, in my Geb/Spock project.
It is not over-complicated IMO, it is a flexible approach to hooking into events via listeners and extensions. The cleanup: block is there to clean up test fixtures, as the name implies. Reporting or other things based on the test result are to be done in a different way.
Having said that, the simple and short answer to your question is: this is still the canonical way to do it. By the way, you didn't tell us what you want to do with the test result in the clean-up block. This kind of thing - explaining how you want to do something without explaining why (i.e. which problem you are trying to solve) - is called the XY problem.
For the last few months, I have been working on the backend (REST API) of a quite big project that we started from scratch. We follow BDD (behaviour-driven development) standards, so we now have a large number of tests (~1000). The tests were written using Chai - a BDD-style assertion library for Node.js - but I think this question can be expanded to general good practices for writing tests.
At first, we tried to avoid code redundancy as much as possible, and it went quite well. As the number of lines of code and the number of people working on the project grew, it became more and more chaotic, but stayed readable. Sometimes a minor change to the code that could be applied in 15 minutes meant having to change e.g. mock data and methods in 30+ files, which translated to 6 hours of changes and re-running tests (an extreme example).
TL;DR
We now want to refactor these BDD tests. As an example, we have a function like this:
function RegisterUserAndGetJWTToken(user_data: any, next: any) {
    chai.request(server).post(REGISTER_URL).send(user_data).end((err: any, res: any) => {
        const token = res.body.token;
        next(token);
    });
}
This function is used in most of our test files. Does it make sense to create something like a test-suite that would contain these kinds of functions, or are there better ways to avoid redundancy when writing tests? Then we could use imports like these:
import {RegisterUserAndGetJWTToken} from "./test-suite";
import {user_data} from "./test-mock-data";
Do you have any good practices that you can share?
Are there any npm packages that could be useful (or packages for other programming languages)?
Do you think that this approach also has downsides (like chaos when there are multiple imports)?
Maybe there is a way to inject or inherit the test-suite into each file, to avoid imports and have it available by default?
EDIT: Forgot to mention - I mean integration tests.
Thanks in advance!
Refactoring current test suite
Your principle should be raising the level of abstraction in the tests themselves. This means that a test should consist of high-level method calls, expressed in domain language. For example:
registerUser('John', 'john@smith.com')
lastEmail = getLastEmailSent()
lastEmail.recipient.should.be 'john@smith.com'
lastEmail.contents.should.contain 'Dear John'
Now in the implementation of those methods, there could be a lot of things happening. In particular, the registerUser function could do a post request (like in your example). The getLastEmailSent could read from a message queue or a fake SMTP server. The thing is you hide the details behind an API.
If you follow this principle, you end up creating an Automation Layer - a domain-oriented, programmatic API to your system. When creating this layer, you follow all the good design principles, like DRY.
The benefit is that when a change in the code happens, there will be only one place to change in the test code - the Automation Layer - and not the tests themselves.
I see that what you propose (extracting the RegisterUserAndGetJWTToken and test data) is a good step towards creating an automation layer. I wouldn't worry about the require calls. I don't see any reason for not being explicit about what our test depends on. Maybe at a later stage some of those could be gathered in larger modules (registration, emailing etc.).
Good practices towards a maintainable test suite
Automate at the right level.
Sometimes it's better to go through the UI or REST, but often a direct call to a function is more sensible. For example, if you write a test for calculating taxes on an invoice, going through the whole application for each of the test cases would be overkill. It's much better to keep one end-to-end test to see that all the pieces act together, and to automate all the specific cases at the lowest possible level. That way you get good coverage as well as speed and robustness in the test suite.
The guiding principle when writing a test is readability.
You can refer to this discussion for a good explanation.
Treat your test helper code / Automation Layer with the same care as you treat your production code.
This means you should refactor it with great care and attention, following all the good design principles.
Closed. This question is opinion-based. It is not currently accepting answers. Closed 8 years ago.
A general question regarding the testing.
Should test cases be written without steps? My lead's test cases are written assuming you know all the requirements and the system, so there is no need to write the steps, because as a QA person you know how to test the requirement. And when executing a test case, you can go through the BRD/SRS again.
Won't this be double the effort? Disadvantages of omitting steps:
Looking up the requirement again in the BRD, where it is spread over 2-3 non-consecutive pages.
Not sufficient for any new tester.
The tester can forget the steps needed to test a requirement.
Advantages of writing steps:
You don't have to look at the BRD again.
Proper test cases with steps can be used by any tester.
Proper coverage.
So steps are required for preparing proper test cases? Are there any standards/rules of thumb for writing test cases at the original level?
Well, if you are designing the test cases, you should include the steps. As one of the testers in the team, you may not be the only person covering all the test cases in a product, and any tester may test any module. So, for a tester who is not familiar with the module for which you have written the cases, it can be very difficult if there are no steps.
Writing test cases with steps saves a lot of time during regression testing and retesting. And it is difficult for a tester to remember all the test cases if the project is long-term.
Test cases are all about steps! Each test plan should have a detailed description of the environment in which the test cases are to be run/executed, and each test case should have detailed steps!
This way nothing is ambiguous and when people working on project change, there are no question marks left.
No matter what your seniors say, please include all detailed steps and environment details in the test plan (and test cases) so that nothing is assumed ever!
No, you should not write the test cases without the steps because:
It's an essential part of the test case. Without it, you can't understand the test cases.
When you hand over the tests, it will be easier for the other person to execute those test cases.
If an issue occurs in the future and your PM asks what steps you performed, the steps will be proof that you tested the features.
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers. Closed 5 years ago.
Working on a large and complex application, I wonder where and whether we should be storing scenarios to document how the software works.
When we discuss problems with an existing feature, it's hard to see what we have already done, and it would be hard to look back with a scrum tool such as TFS. We have some tests, but these are not visible to the product owner. Should we be looking to pull out some vast story/scenario list, amending and updating it as we go, or is that not agile?
We have no record of how the software works other than the code, some unit tests, some test cases and a few out-of-date user guides.
We tend to use our automated acceptance tests to document this. As we work on a user story we also develop automated tests and this is part of our Definition of Done.
We use SpecFlow for the tests and these are written as Given, When, Then scenarios that are easy to read and understand and can be shared with the product owners.
These tests add a lot of value as they are our automated regression suite so regression testing is quicker and easier, but as they are constantly kept up to date as we develop new stories they also act as documentation of how the system works.
You might find it useful to have a look at a few blogs around Specification by Example which is essentially what we are trying to do.
A few links I found useful in the past are:
http://www.thoughtworks.com/insights/blog/specification-example
http://martinfowler.com/bliki/SpecificationByExample.html
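For illustration, a SpecFlow scenario of the kind described above might look like this (the feature and step wording are invented for the example, not taken from any real project):

```gherkin
Feature: User registration
  Scenario: A new user receives a welcome email
    Given no account exists for "john@smith.com"
    When John registers with the email "john@smith.com"
    Then an account is created for "john@smith.com"
    And a welcome email is sent to "john@smith.com"
```

Because the scenario is plain Given/When/Then language, a product owner can read it directly, while the step definitions behind it keep it executable as a regression test.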
Apart from the tests, we also used a wiki for documentation. In particular, the REST API was documented with request/response examples, as well as other software behaviour (the results of long discussions, details that are difficult to remember).
Since you want to be able to match a description of what you've done to the running software, it sounds like you should put that description in version control along with the software. Start with a docs/ directory, then add detail as you need it. I do this frequently, and it just works. If you want to make it web-servable, set up a web server somewhere to check out the docs every so often and point the document root at the working copy's docs/ directory.
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers. Closed 7 years ago.
The assertions provided by Node.js's assert module for unit testing are very limited. Even before I had written my first test, I was already creating a few assertions of my own, as it was clear I would keep re-using them.
Could you recommend some good library of assertions to test for common javascript situations (object structures, objects classes, etc., etc.)?
Ideally it should integrate well with nodeunit (or, better, extend its assertions) - my own assertions do not; I have to pass the test object to them as an extra variable...
The only one I've seen is Chai. What could you say about it?
It's also somewhat a matter of preference - whether you prefer to test with the assert syntax or BDD-style assertions (smth.must.equal(...)).
For the assert style, Chai's assert may work well. It has more built-in matchers than Node's own assert module.
If you find the BDD-style more readable and fluent, all three do that:
Chai.js.
Must.js by yours truly.
Should.js.
They differ primarily in the simplicity or complexity of their API when it comes to the various matchers. Their essential equality assertions, though, are interchangeable - foo.must.equal(42) or foo.should.equal(42).
There's one thing you need to be aware of when picking Chai.js or Should.js that I argue is a fundamental design mistake - their practice of asserting on property access, as opposed to calling the matcher as a function. I've written a critique of asserting on property access and how it may cause false positives in tests.
I use my very own assertion library, node-assertthat. Its specialty is its syntax, which is very fluent and (IMHO) very readable (inspired by NUnit for .NET), e.g.:
var actual = [...],
    expected = [...];

assert.that(actual, is.equalTo(expected));
Basically it works very well, but there are not too many asserts implemented yet. So whether it is "good" or not I won't decide - that's up to you.
It makes use of a comparison library which provides things such as comparing objects by structure and some other nice things: compare.js.
E.g., if you have two objects and you want to know whether they are equal (by their values), you can do
cmp.equal(foo, bar)
or short as:
cmp.eq(foo, bar)
You can also compare objects by structure, e.g. check whether two objects implement the same interface. You can do this like so:
cmp.equalByStructure(foo, bar)
or short as:
cmp.eqs(foo, bar);
Again, I'll let you decide whether it's "good", but at least I am quite comfortable with using both.
PS: I know that Stack Overflow is no place to advertise your own projects, but I think that in this case the question forces me to, as the answer to 'could you recommend' is 'my own tooling' - for me it is the best fit. Please don't consider this post spam.
Chai is great. I've tried quite a few different setups for both Node and browser testing, but the only one that satisfies me is Mocha + Chai + Sinon. Choosing an assertion library is also a matter of style; I personally like chai.expect with its chained API, and it has pretty much every method you need: type validation, object property checking, exceptions... I also find it very flexible.
You might be interested in Hamjest, a JavaScript matcher library based on Hamcrest.
It provides a framework-agnostic library of assertions and matchers that can be used with nodeunit, Mocha, Jasmine and others.
It has two main advantages over Chai, Jasmine and similar frameworks:
Matchers can be nested and combined to create very expressive assertions.
Assertion errors describe the reason for the mismatch in great detail (e.g. which property did not match, which element was missing, etc.) instead of just repeating the assertion.
Disclaimer: I'm the main author of Hamjest.
Expect is an easy-to-use, extensible assertion library for Node.js and the browser. I have used it a couple of times with Mocha, and I can say it has every assertion you need. You can learn how to use it here. Example:
var pi = Math.PI;

expect(pi)
  .toExist()
  .toBeLessThan(4)
  .toBeGreaterThan(3);