How to handle should.js assert error - node.js

How should one handle an uncaught exception thrown by a failed should.js (or Node.js) assertion and keep execution in the same function/block where the assertion failed?
I tried wrapping the assertion in a try/catch, but it seems to go up to process.on('uncaughtException') anyway.
Lastly, is it good practice (and performant) to use assertions in production code to validate object properties?
Thanks!

As the documentation states, Node's assert is basically for unit testing. Therefore I wouldn't use it in production code. I prefer unit tests that make sure the assertions hold in several situations.
However, I think you are using assert in the wrong way here: if an assertion fails, something is wrong. Your app is in some kind of unknown state.
If you have some kind of handling for invalid objects, assert isn't the right tool: as far as I understand your use case, you don't really require an object to be valid, but want to do something different if it isn't. That's a simple condition, not an assertion.
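To make that distinction concrete, here is a minimal sketch (the order object, its fields and the discount step are invented purely for illustration): the condition handles invalid input that you expect to receive and recover from, while the assertion guards an internal invariant that should never be violated.

const assert = require('assert');

// Hypothetical helper, only here to make the example self-contained.
function applyDiscount(order) {
  return { ...order, total: order.total * 0.9 };
}

function processOrder(order) {
  // Condition: an invalid order is an expected case we handle gracefully.
  if (!order || typeof order.total !== 'number') {
    return { ok: false, reason: 'invalid order' };
  }

  const discounted = applyDiscount(order);

  // Assertion: a negative total here would mean our own logic is broken.
  assert(discounted.total >= 0, 'discount produced a negative total');
  return { ok: true, order: discounted };
}

console.log(processOrder({ total: 100 })); // { ok: true, ... }
console.log(processOrder(null));           // { ok: false, reason: 'invalid order' }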

Hi @dublx, I think there are perfectly valid use cases for assertions in production code, e.g. if you rely on an external API that you know behaves in a certain way. That API might change suddenly and break your code. If an assertion detected that the API changed and you got an automatic e-mail, you could fix it even before your customers noticed the breakage.
That said, I recommend assume.js, which solves exactly your problem. Even the performance is splendid: one assertion takes just 17 µs (0.017 ms).
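Regarding the try/catch part of the question: a should.js assertion failure is an ordinary thrown AssertionError, so a try/catch does contain it as long as the assertion runs synchronously inside the try block. If the assertion fires inside an asynchronous callback, the throw happens on a later tick and escapes to process.on('uncaughtException'), which would explain the behaviour described above. A minimal sketch with should.js (the payload object and its properties are made up):

const should = require('should'); // extends Object.prototype with a .should getter

function validate(payload) {
  try {
    // Each assertion throws an AssertionError when it fails.
    payload.should.have.property('id');
    payload.id.should.be.a.Number();
    return true;
  } catch (err) {
    // Execution continues here, in the same function.
    console.error('validation failed:', err.message);
    return false;
  }
}

console.log(validate({ id: 42 }));      // true
console.log(validate({ name: 'foo' })); // false, the error is caught locally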

Related

Get test result in Spock "cleanup" method

Is it possible, in Spock's cleanup method, to check whether the feature (or, even better, the current iteration of the feature) passed or failed? In Java's JUnit/TestNG/Cucumber it can be done in one line. But what about Spock?
I've found similar questions here:
Find the outcome/status of a test in Specification.cleanup()
Execute some action when Spock test fails
But both seem overcomplicated, and those answers are years old. Is there any better solution?
Thanks in advance
Update: the main goal is to save screenshots and perform some additional actions for failed tests only, in my Geb/Spock project.
It is not over-complicated IMO; it is a flexible approach to hooking into events via listeners and extensions. The cleanup: block is there to clean up test fixtures, as the name implies. Reporting or other actions based on the test result are to be done in a different way.
Having said that, the simple and short answer to your question is: this is still the canonical way to do it. By the way, you didn't tell us what you want to do with the test result in the clean-up block. This kind of thing - explaining how you want to do something but not explaining why (i.e. which problem you are trying to solve) - is called the XY problem.

Python test method for additional method call

I have a situation and I could not find anything online that would help.
My understanding is that Python testing is rigorous, to ensure that if someone changes a method, the test will fail and alert the developers to rectify the difference.
I have a method that calls 4 other methods from other classes. Patching made it really easy for me to determine whether a method has been called. However, let's say someone on my team decides to add a 5th method call; the test will still pass. Assuming that no other method calls should be allowed inside, is there a way in Python to test that no other calls are made? Refer to example.py below:
example.py:
def example():
    classA.method1()
    classB.method2()
    classC.method3()
    classD.method4()
    classE.method5()  # we do not want this method in here, test should fail if it detects a 5th or more method.
Is there any way to cause the test case to fail if any additional method calls are added?
You can easily test (with mock, or doing the mocking manually) that example() does not specifically call classE.method5, but that's about all you can expect - it won't work (unless explicitly tested too) for, say, classF.method6(). Such a test would require either parsing the example function's source code or analysing its bytecode representation.
This being said:
My understanding is that Python testing is rigorous, to ensure that if someone changes a method, the test will fail
I'm afraid your understanding is a bit off - it's not about "changing the method", it's about "unexpectedly changing behaviour". In other words, you should first test for behaviour (black-box testing), not for implementation (white-box testing). Now the distinction between "implementation" and "behaviour" can be a bit blurry depending on the context (you can consider that "calling X.y()" is part of the expected behaviour, and it sometimes does make sense), but the distinction is still important.
With regard to your current use case (and without more context, i.e. why shouldn't the function call anything else?), I personally wouldn't bother trying to be that defensive; I'd just clearly document this requirement as a comment in the example() function itself, so anyone editing this code immediately knows what they should not do.

if and when try/except statements are overkill

Sorry if this isn't the right place to ask this. I'm still learning a lot about good design. I was just wondering, say I process raw data through 20 functions. Is it idiotic or extremely slow to think of wrapping the contents of each function with a try/except statement, so if I ever run into issues I can see exactly where and why the data wasn't properly processed? Surely there's another more efficient way of facilitating the debugging process.
I've tried searching through articles for if and when to use try/except statements. But I think the experience of some of the guys on stack overflow will provide a much better answer :)
I can only give my personal opinion, but I think you shouldn't wrap your entire code inside try/except blocks. To me, these are meant for specific cases (manipulating streams, sending HTTP requests), to ensure that we don't reach a part of the code that can't run, or to adopt specific behaviour depending on the error.
The risk is catching an error from another line of your program without knowing it (for example, if you wrap an entire function).
It is important to cover your code, but without completely hiding every error you could encounter.
You have probably already checked it, but here is a little reminder of good practices:
Try / Except good practices
I hope that will be helpful!
When exceptions are raised (and recorded somewhere) they have a stack trace showing the calls that led to the error. That should be enough to trace where the problem was.
If you catch an exception at the lowest level, how will the subsequent methods continue? They won't get the return values they were expecting. Better to let the exception propagate up to somewhere it makes sense to handle it. If you do manual checks, you can raise specific exceptions with messages to help debugging, e.g.:
def foo(bar):
    if bar < 0:
        raise ValueError(f"Can't foo a value less than 0, got {bar}")
    # foo bar here

How to test a catch clause if I can't reproduce the error during the test?

I'm using Jest and Supertest in my server-side application. I was hoping to increase the coverage of my tests by, obviously, testing every uncovered line, until I got blocked by this:
Activity.find()
  .then((data) => {
    res.json(data);
  })
  .catch(e => res.status(400).send(e));
How can I test the catch clause if I can't reproduce that kind of error in my tests (or at least I don't know how)?
In a unit test you should test only your code and should not rely on third-party libraries. Of course, this advice should be applied reasonably (e.g. we must depend on the compiler and the out-of-the-box libraries of the platform we are using).
Up to now it seemed reasonable not to stub json, as it's quite an established library you can rely on. But as soon as you want to test behaviour such as thrown exceptions, there is no other way than to create a stub, inject it somehow into your code, and, in the test setup, make it throw an exception when json is called.
As a side effect, this will decouple your code from a particular implementation of json and make your code more flexible. Of course, decoupling from JSON implementations seems a bit over the top, but it nevertheless shows how test-driven design works in this case (even though the code came first).
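In practice, one common way to drive execution into that catch branch is to stub Activity.find so that it returns a rejected promise, then assert on the 400 response with Supertest. A rough sketch, assuming Jest as the runner, an Express app exported from ./app, the model exported from ./models/activity, and a GET /activities route wired to the handler above (adjust names and paths to match the real project):

const request = require('supertest');
const app = require('./app');                  // assumed app module
const Activity = require('./models/activity'); // assumed model module

describe('GET /activities', () => {
  afterEach(() => jest.restoreAllMocks());

  it('responds with 400 when the query fails', async () => {
    // Force the promise returned by Activity.find() to reject,
    // so the handler falls through to its .catch() branch.
    jest.spyOn(Activity, 'find').mockRejectedValueOnce(new Error('db down'));

    await request(app)
      .get('/activities')
      .expect(400);
  });
});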

Should my Node.js tests use HTTP requests

I have always thought of "tests" as testing the whole thing. I have seen that in many (not to say all) cases, tests are performed just on the logic, not considering the "whole" thing: small pieces of code being tested (unit tests).
In my mind, a test should check whether the code is OK, and the best way to achieve that (for a web server) is to use a real HTTP request.
However, I have noticed that this practice is not widely "recommended"; I can't say why, but in most frameworks I have used, you couldn't test with real requests without hacking into something.
Is it something TERRIBLY BAD, or just "ok"? I mean, why mock up something that can be tested for real...
I'm not sure this is covered by other questions, but if so, I couldn't find it.
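For what it's worth, in the Node.js ecosystem this style of test is well supported: Supertest takes an Express (or similar) app, binds it to an ephemeral port and issues real HTTP requests against it, so no framework hacking is needed. A small sketch, assuming Jest as the runner; the app and the /ping route are invented for illustration:

const express = require('express');
const request = require('supertest');

// A tiny app stands in for the real server under test.
const app = express();
app.get('/ping', (req, res) => res.json({ pong: true }));

describe('GET /ping', () => {
  it('answers a real HTTP request', async () => {
    const res = await request(app)
      .get('/ping')
      .expect('Content-Type', /json/)
      .expect(200);

    expect(res.body).toEqual({ pong: true });
  });
});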
