How to test a catch clause if I can't reproduce the error during the test? - node.js

I'm using Jest and Supertest in my server-side application. I was hoping to increase the coverage of my tests by testing every uncovered line, until I got blocked by this:
Activity.find()
  .then((data) => {
    res.json(data);
  })
  .catch((e) => res.status(400).send(e));
How can I test the catch clause if I can't reproduce that kind of error in my tests (or at least I don't know how)?

In a unit test you should test only your code and should not rely on 3rd-party libraries. Of course, this advice should be applied reasonably (e.g. we must depend on the compiler and the out-of-the-box libraries of the platform we are using).
Up to now it seemed reasonable not to stub json, as it's a well-established library you can rely on. But as soon as you want to test behaviour such as thrown exceptions, there is no other way than to create a stub, inject it somehow into your code, and in the test setup make it throw an exception when json is called.
As a side effect, this decouples your code from a particular implementation of json and makes your code more flexible. Of course, decoupling from JSON implementations seems a bit over the top, but it nevertheless shows how test-driven design works in this case (even though here the code came first).
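For instance, here is a minimal sketch of one way to reach the catch branch with Jest, by stubbing the Mongoose-style Activity.find so that it rejects; the app module, the model path and the /activities route are assumptions, not part of the original question:

const request = require('supertest');
const app = require('../app'); // hypothetical Express app under test
const Activity = require('../models/activity'); // hypothetical model

test('responds with 400 when the database query fails', async () => {
  // Stub the query so the promise chain lands in the catch clause.
  jest.spyOn(Activity, 'find').mockRejectedValue(new Error('db down'));
  const res = await request(app).get('/activities'); // hypothetical route
  expect(res.status).toBe(400);
});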

Related

Python test method for additional method call

I have a situation and I could not find anything online that would help.
My understanding is that Python testing is rigorous to ensure that if someone changes a method, the test would fail and alert the developers to go rectify the difference.
I have a method that calls 4 other methods from other classes. Patching made it really easy for me to determine whether a method has been called. However, let's say someone on my team decides to add a 5th method; the test will still pass. Assuming that no other method calls should be allowed inside, is there a way in Python to make sure no other calls are made? Refer to example.py below:
example.py:
def example():
    classA.method1()
    classB.method2()
    classC.method3()
    classD.method4()
    classE.method5()  # we do not want this method in here; the test should fail if it detects a 5th or later call
Is there any way to cause the test case to fail if any additional methods are added?
You can easily test (with mock, or by doing the mocking manually) that example() does not specifically call classE.method5, but that's about all you can expect - it won't work (unless explicitly tested too) for e.g. classF.method6(). Such a test would require either parsing the example function's source code or analysing its bytecode representation.
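For illustration, the same kind of "was not called" check in Jest (keeping to the Node.js focus of this page) looks like this; classE and example are hypothetical stand-ins for the Python names above:

// classE.method5 is replaced by a Jest mock so calls to it can be inspected.
const classE = { method5: jest.fn() };

function example() {
  // ... calls method1() through method4(), but must never call classE.method5()
}

test('example() does not call classE.method5', () => {
  example();
  expect(classE.method5).not.toHaveBeenCalled();
});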
This being said:
my understanding is that python testing is rigorous to ensure that if someone changes a method, the test would fail
I'm afraid your understanding is a bit off - it's not about "changing the method", it's about "unexpectedly changing behaviour". In other words, you should first test for behaviour (black-box testing), not for implementation (white-box testing). Now the distinction between "implementation" and "behaviour" can be a bit blurry depending on the context (you can consider that "calling X.y()" is part of the expected behaviour, and sometimes that indeed makes sense), but the distinction is still important.
With regard to your current use case (and without more context - i.e. why shouldn't the function call anything else?), I personally wouldn't bother trying to be that defensive, and I'd just clearly document this requirement as a comment in the example() function itself, so anyone editing this code immediately knows what not to do.

How to avoid code redundancy in large amounts of Node.JS BDD tests

For the last few months, I was working on the backend (REST API) of a quite big project that we started from scratch. We were following BDD (behaviour-driven development) standards, so now we have a large number of tests (~1000). The tests were written using Chai - a BDD-style assertion library for Node.js - but I think this question can be extended to general good practices for writing tests.
At first, we tried to avoid code redundancy as much as possible, and it went quite well. As the number of lines of code and the number of people working on the project grew, it became more and more chaotic, but still readable. Sometimes minor changes in the code that could be applied in 15 minutes caused the need to change e.g. mock data and methods in 30+ files, which meant 6 hours of changes and running tests (an extreme example).
TL;DR
We want to refactor now these BDD tests. As an example we have such a function:
function RegisterUserAndGetJWTToken(user_data: any, next: any) {
  chai.request(server).post(REGISTER_URL).send(user_data).end((err: any, res: any) => {
    const token = res.body.token;
    next(token);
  });
}
This function is used in most of our test files. Does it make sense to create something like a test suite that would contain these kinds of functions, or are there better ways to avoid redundancy when writing tests? Then we could use imports like these:
import {RegisterUserAndGetJWTToken} from "./test-suite";
import {user_data} from "./test-mock-data";
Do you have any good practices that you can share?
Are there any npm packages that could be useful (or packages for other programming languages)?
Do you think that this approach also has downsides (like chaos when there would be multiple imports)?
Maybe there is a way to inject or inherit the test-suite for each file, to avoid imports and have it by default in each file?
EDIT: Forgot to mention - I mean integration tests.
Thanks in advance!
Refactoring current test suite
Your guiding principle should be to raise the level of abstraction in the tests themselves. This means that a test should consist of high-level method calls, expressed in domain language. For example:
registerUser('John', 'john@smith.com');
const lastEmail = getLastEmailSent();
lastEmail.recipient.should.equal('john@smith.com');
lastEmail.contents.should.contain('Dear John');
Now in the implementation of those methods, a lot of things could be happening. In particular, the registerUser function could do a POST request (like in your example). The getLastEmailSent function could read from a message queue or a fake SMTP server. The point is that you hide the details behind an API.
If you follow this principle, you end up creating an Automation Layer - a domain-oriented, programmatic API to your system. When creating this layer, you follow all the good design principles, like DRY.
The benefit is that when a change in the code happens, there will be only one place to change in the test code - the Automation Layer - and not the tests themselves.
I see that what you propose (extracting RegisterUserAndGetJWTToken and the test data) is a good step towards creating an automation layer. I wouldn't worry about the import calls; I don't see any reason for not being explicit about what our tests depend on. Maybe at a later stage some of those could be gathered into larger modules (registration, emailing etc.).
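As a sketch of what such an automation-layer module might look like (the route, the server module and the payload shape are assumptions, not taken from the question):

// test-suite.js - domain-oriented helpers shared across test files
const chai = require('chai');
const chaiHttp = require('chai-http');
const server = require('./server'); // hypothetical app under test

chai.use(chaiHttp);

const REGISTER_URL = '/api/register'; // hypothetical route

// Hides the HTTP details so that tests speak domain language only.
function registerUser(name, email) {
  return new Promise((resolve, reject) => {
    chai.request(server)
      .post(REGISTER_URL)
      .send({ name, email })
      .end((err, res) => (err ? reject(err) : resolve(res.body.token)));
  });
}

module.exports = { registerUser };

A test file then only needs const { registerUser } = require('./test-suite'); and each test reads as a single high-level call.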
Good practices towards a maintainable test suite
Automate at the right level.
Sometimes it's better to go through the UI or REST, but often a direct call to a function is more sensible. For example, if you write a test for calculating taxes on an invoice, going through the whole application for each of the test cases would be overkill. It's much better to have one end-to-end test that checks all the pieces act together, and to automate all the specific cases at the lowest possible level. That way we get good coverage as well as speed and robustness of the test suite.
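A small sketch of that invoice-tax example, with the specific cases exercised through direct function calls; calculateTax and its rates are invented for illustration:

const assert = require('assert');

// Hypothetical pure function under test, inlined so the sketch runs.
function calculateTax(amount, rate) {
  return rate === 'reduced' ? amount * 0.05 : amount * 0.2;
}

describe('calculateTax', () => {
  // Each rule is checked by calling the function directly,
  // instead of driving the whole application for every case.
  it('applies the standard 20% rate', () => {
    assert.strictEqual(calculateTax(100, 'standard'), 20);
  });
  it('applies the reduced 5% rate', () => {
    assert.strictEqual(calculateTax(100, 'reduced'), 5);
  });
});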
The guiding principle when writing a test is readability.
You can refer to this discussion for a good explanation.
Treat your test helper code / Automation Layer with the same care as you treat your production code.
This means you should refactor it with great care and attention, following all the good design principles.

Should my Node.js tests use HTTP requests

I have always thought of "tests" as testing the whole thing. I have seen that in many (not to say all) cases, tests are performed just with logic, not considering the "whole" thing - small pieces of code being tested (unit tests).
In my mind, a test should check whether the code is OK, and in order to do that, using a real HTTP request is the best way of achieving it (for a web server).
However, I have noticed that this practice is not widely recommended; I can't say why, but in most frameworks that I have used, you couldn't test with real requests without hacking something together.
Is it something TERRIBLY BAD, or just "OK"? I mean, why mock up something that can be tested for real...
I'm not sure this is covered by other questions, but if so, I couldn't find it.
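For what it's worth, Supertest (used in the first question on this page) takes exactly this approach: given an app object, it binds it to an ephemeral port and issues real HTTP requests against it. A minimal sketch, with the app and route invented for illustration:

const request = require('supertest');
const express = require('express');

const app = express();
app.get('/ping', (req, res) => res.json({ ok: true })); // hypothetical route

test('GET /ping answers over a real HTTP request', async () => {
  const res = await request(app).get('/ping'); // real request, ephemeral port
  expect(res.status).toBe(200);
  expect(res.body.ok).toBe(true);
});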

How to handle should.js assert error

How should one handle an uncaught exception thrown by a should.js (or Node.js) failed assertion and keep executing in the same function/block where the assertion failed?
I tried wrapping the assertion in a try/catch, but it seems to go up to process.on('uncaughtException') anyway.
Lastly, is it good practice, and performant, to use assertions in production code to validate object properties?
Thanks!
As the documentation states, Node's assert is basically meant for unit testing, so I wouldn't use it in production code. I prefer unit tests that make sure the assertions hold in several situations.
However, I think you are using assert in the wrong way here: if an assertion fails, something is wrong. Your app is in some kind of unknown state.
If you have some kind of handling for invalid objects, assert isn't the right tool: as far as I understand your use case, you don't really require the object to be valid, but want to do something different if it's not. That's a simple condition, not an assertion.
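A tiny sketch of that distinction (handleUser and its input shape are made up for illustration):

const assert = require('assert');

function handleUser(user) {
  // Expected case: invalid input is part of normal control flow - a condition.
  if (!user || typeof user.email !== 'string') {
    return { ok: false, error: 'invalid user' };
  }
  // Invariant: past this point the e-mail must be present; if this ever
  // fails, the app is in an unknown state and crashing loudly is the point.
  assert(user.email.length > 0);
  return { ok: true };
}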
Hi @dublx, I think there are perfectly valid use cases for assertions in production code, e.g. if you rely on an external API that you know behaves in a certain way. This API might change suddenly and break your code. If an assertion detected that the API had changed and you got an automatic e-mail, you could then fix it even before your customers notice the breakage.
That said, I recommend assume.js, which solves exactly your problem. Even the performance is splendid: one assertion eats just 17µs, or 0.017ms.

Should I switch from Vows to Mocha?

I'm trying to decide whether to switch from Vows to Mocha for a large Node app.
I've enjoyed almost all of the Vows experience - but there is just something strange about the argument passing. I always have to scratch my head to remember how topics work, and that interferes with the basics of getting the tests written. It is particularly problematic with deeply nested asynchronous tests, though I find that combining Vows with async.js can help a little.
Mocha, on the other hand, seems more flexible in its reporting. I like the freedom to choose the testing style and, importantly, it runs in the browser too, which will be very useful. But I'm worried that it still doesn't solve the readability problem of deeply nested asynchronous tests.
Does anyone have any practical advice - can Mocha make deeply nested tests readable? Am I missing something?
Mocha is ace. It provides a done callback, rather than the waitsFor that Jasmine provides. I can't speak about migration from Vows, but from Jasmine it was straightforward. Inside your Mocha test function you can use async if you want (or Seq etc. if you want to be legacy), though if you require nested callbacks at that point it's an integration test, which might make you think about the granularity of your tests.
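A minimal sketch of that done-callback style (fetchUser is a hypothetical async function, stubbed here so the sketch runs):

const assert = require('assert');

// Hypothetical async function under test.
function fetchUser(id, cb) {
  setImmediate(() => cb(null, { id }));
}

describe('fetchUser', function () {
  it('passes the user to the callback', function (done) {
    fetchUser(42, function (err, user) {
      if (err) return done(err); // fail the test with the error
      assert.strictEqual(user.id, 42);
      done(); // tell Mocha the async test has finished
    });
  });
});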
OT: 100% test coverage rarely delivers any value.
Deeply nested tests are solved by using flow control in your unit-test suite.
Vows does not make this easy, because its exports style requires flow-control libraries written specifically to support it.
Either write a flow-control library for Vows, or switch to Mocha and re-use an existing flow-control library.
