I'm currently using node-fibers to write synchronous server-side code. I primarily do error handling through try-catch blocks, but there's always a possibility of an error occurring in external libraries or other little bits of asynchronous code. I'm thinking about using the new domains functionality to try to route those errors to the correct request, and I'm wondering if anyone has tried using fibers and domains in the same app.
Depending on how domains work behind the scenes, I can imagine that fibers might break some of the assumptions used to associate async code with the correct domain. Specifically, I'm worried that domains might track contexts with something like the following, which could break under fibers, since fibers removes the guarantee that a function will run to completion before any other code runs:
run_in_domain = function (to_run) {
  var old_domain = global_domain;
  global_domain = new_domain();
  try {
    to_run();
  } finally {
    global_domain = old_domain;
  }
}
Has anyone successfully or unsuccessfully tried to get fibers and domains to play together?
I have written an article on how node domains work: How Node Domains Work.
Basically they work similarly to process.on('uncaughtException').
I can see that the author of node-fibers states that you can use process.on('uncaughtException') to handle exceptions with node-fibers, so there shouldn't be an issue. See Handling Uncaught Exceptions in a Fiber.
Related
We use an express backend and we recently ran into a strange bug when adding a new middleware to one of our routes. We have been binding the handlers to the route as follows:
app.all('/route', [AsyncFunction1, AsyncFunction2, AsyncFunction3, AsyncFunction4, AsyncFunction5])
I added another Middleware to the route, so now we have:
app.all('/route', [AsyncFunction1, AsyncFunction2, AsyncFunction3, AsyncFunction4, AsyncFunction5, AsyncFunction6])
It worked fine and as expected until we ran the containerized version (we run containerized versions in production). Suddenly, any request other than GET fails on that route. Hitting it with an OPTIONS request returns "GET, HEAD". Again, it works fine locally.
We tried completely commenting out the new middleware and just having it return next(), but that didn't work. We tried removing one of the other, existing middlewares, and the endpoint worked again. We tried duplicating one of the existing middlewares under a new name and adding it in place of the new one, and we got the same error. We've tried numerous different things, and the number of middlewares seems to be the limiting factor. Nothing I've read makes any mention of a limit. Even more strangely, it works locally but not when containerized.
We did come up with two fixes. One was to assign the handlers as follows:
app.all('/route', AsyncFunction1)
app.all('/route', AsyncFunction2)
app.all('/route', AsyncFunction3)
.... so on for all 6
This works as expected but is a very low-level change of the sort we'd like to avoid if possible. We ended up moving the logic of the new middleware into one of the existing middlewares. Everything works as expected.
But there's something weird going on here and we don't know what!
The container image uses the same version of node and npm we are running locally. I'm thinking it might be some sort of resource constraint? I don't really know.
Is there any type of limit on the number of handlers added via array?
I am running my integration test cases in separate files for each API.
Before it begins I start the server along with all services, like databases. When it ends, I close all connections. I use Before and After hooks for that purpose. It is important to know that my application depends on an enterprise framework where most "core work" is written and I install it as a dependency of my application.
I run the tests with Mocha.
When the first file runs, I see no problems. When the second file runs, I get a lot of errors related to database connections. I tried to fix it in many different ways, most of which failed because of the limitations the Framework imposes on me.
While debugging, I found out that Mocha actually loads all the files first, which means that all code written before the hooks and the describe calls is executed. So by the time the second file is loaded, require.cache is already full of modules. Only after that does the suite execute the tests sequentially.
That has a huge impact with this Framework, because many of its objects are Singletons, so if an after hook closes a connection to a database, it closes the connection inside the Singleton. The way the Framework was built makes it very hard to work around this problem, for example by reconnecting to all services in the before hook.
I wrote some very ugly code that helps me out until I can refactor the Framework. This goes in each test file where I want to invalidate the cache.
function clearRequireCache() {
  Object.keys(require.cache).forEach(function (key) {
    delete require.cache[key];
  });
}

before(() => {
  clearRequireCache();
});
It is working, but it seems to be very bad practice, and I don't want this in the code.
As a second idea, I was thinking about running Mocha multiple times, once for each "module" (in the sense of my Framework) or file.
"scripts": {
"test-integration" : "./node_modules/mocha/bin/mocha ./api/modules/module1/test/integration/*.integration.js && ./node_modules/mocha/bin/mocha ./api/modules/module2/test/integration/file1.integration.js && ./node_modules/mocha/bin/mocha ./api/modules/module2/test/integration/file2.integration.js"
}
I was wondering if Mocha provides a solution to this problem, so I can get rid of that code and delay the refactoring a bit.
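Mocha itself runs all files in a single process, which is exactly why require.cache persists between them. One way to get per-file isolation without enumerating every file by hand is a shell glob loop in the npm script; a sketch, with the paths assumed from the script above (this relies on npm using a POSIX shell):

```json
"scripts": {
  "test-integration": "for f in ./api/modules/*/test/integration/*.integration.js; do ./node_modules/.bin/mocha \"$f\" || exit 1; done"
}
```

Each Mocha invocation gets a fresh process and therefore a fresh module cache, at the cost of slower overall runs.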
I am new to Node and to what I would call real server-side programming (vs PHP). I was setting up a user database with MongoDB, Mongoose, and a simple Mongoose user plugin that came with a schema and password handling. You can add validation to Mongoose for your fields like so:
schema.path('email').validate(function (email) {
  if (this.skipValidation) return true
  return email.trim().length
}, 'Please provide a valid email')
(This is not my code.) I noticed, though, that when I passed an invalid or blank email, .trim() failed and the entire server crashed. This is very worrisome to me, because things like this don't happen in your good ol' WAMP stack. If you have a bug there, 99.9% of the time it's just the browser that is affected.
Now that I am delving into lower level programming, do I have to be paranoid about every incoming variable to a simple function? Is there a tried-and-true error system I should follow?
Just check that the variable is set before calling trim on it, for example:
if (!email) {
  return false;
}
And if you want to keep your app running permanently, use PM2 instead.
If you are interested in keeping it running forever, read this interesting post: http://devo.ps/blog/goodbye-node-forever-hello-pm2/
You may consider using forever to keep your node.js program running. Even if it crashes, it restarts automatically, and the error is logged as well.
Note: Although you could actually catch all exceptions to prevent the node.js from crashing, it is not recommended.
One of our strategies is to make use of Node.js Domain to handle errors - http://nodejs.org/api/domain.html
You should set up an error-logging module like Winston; once configured, it produces useful error/exception logs.
Have a look at this answer for how to catch errors within your node implementation; it is specific to expressjs but still relevant.
Once you catch exceptions, you prevent unexpected crashes.
I'm currently working on a project where one of the core Node.js modules (dns) does not behave the way I need it to. I've found a module that seems like it would work as a replacement: https://github.com/tjfontaine/node-dns. However, the code using the DNS module is several layers down from the application code I've written. I'm using the Request module (https://github.com/mikeal/request) which makes HTTP requests and uses several core modules to do so. The Request module does not seem to be using the DNS module directly, but I'm assuming one of those core modules is calling the DNS module.
Is there a way I can tell Node to use https://github.com/tjfontaine/node-dns whenever require('dns') is called?
Yes, and you should not.
require.cache is extremely dangerous. It can cause memory leaks if you do not know what you are doing, and cache mismatches, which are potentially worse. Most attempts to change core modules can also result in unintentional side effects (such as discoverability failures with DNS).
You can create a user-space require with something like https://github.com/bmeck/node-module-system; however, this faces the same dangers, it just is not directly tied to core.
My suggestion would be to wrap your require('dns').resolve with require('async').memoize, but be aware that DNS discoverability may fall over.
For better or worse, I've implemented module white-lists before by doing something like what is demonstrated below. In your case, it ought to be possible to explicitly check for the dns module name and delegate everything else to the original require(). However, this implementation assumes that you have full control over when and how your own code is executed.
var _require = constructMyOwnRequire(/* intercept 'dns' and require something else */);
var sandbox = Object.freeze({
  /* directly import all other globals like setTimeout, setInterval, etc. */
  require: Object.freeze(_require)
});

try {
  vm.runInContext(YOUR_SCRIPT, Object.freeze(vm.createContext(sandbox)));
} catch (exception) {
  /* stuff */
}
Not really.
require is a core variable local to each module, so you can't stub it: Node will hand the untouched require to the loaded module.
You could run those things using the vm module. However, you would have to write too much code to do a "simple workaround" (give all needed variables to the request module, stub the needed ones to work properly, etc, etc...).
I'm working with an automated system written in Node.js that creates nodes on the fly across the cloud, connecting them by means of the ZMQ binding for Node.js. Sometimes I get the error Error: Address already in use, which is my fault because I have a bug somewhere. I would like to know whether it's possible, with the Node.js binding of ZMQ, to check the availability of an address before binding it.
It's not really what I was searching for, but in the end I decided to go for the "simple" solution and use a try-catch block to check if there is an error when binding to a host:port. In practice this is what I do:
try {
  receiver.bindSync("tcp://" + host + ":" + port);
} catch (e) {
  console.log(e);
}
Which is stupid and straightforward. I was looking for a more accurate way to do this (for example, as mentioned in the question, a function to check the availability of the address, rather than catching the error).