Is there an elegant way to ensure that only one instance of a nodejs app is running?
I tried the pidlock npm package, but it seems to work only on *nix systems.
Is it possible using a mutex?
Thanks
I've just found the single-instance library, which is intended to work on all platforms, and I can confirm that it works well on Windows.
You can install it with npm i single-instance, and you wrap your application code like this:
const SingleInstance = require('single-instance');
const locker = new SingleInstance('my-app-name');
locker.lock().then(() => {
// Your application code goes here
}).catch(err => {
// This block will be executed if the app is already running
console.log(err); // it will print out 'An application is already running'
});
If I understand its source code correctly, it implements the lock using a socket: if it can connect to the socket, the application is already running; if it can't connect, it creates the socket and thereby takes the lock.
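A minimal sketch of that idea using Node's built-in net module (the socket name and the lock helper are illustrative, not taken from the library's source):

const net = require('net');
const os = require('os');
const path = require('path');

// Windows uses a named pipe; *nix uses a socket file in the temp directory.
const socketPath = process.platform === 'win32'
  ? '\\\\.\\pipe\\my-app-name'
  : path.join(os.tmpdir(), 'my-app-name.sock');

function lock(onLocked, onAlreadyRunning) {
  const client = net.connect({ path: socketPath }, () => {
    // Connected, so another instance is already listening on the socket.
    client.end();
    onAlreadyRunning();
  });
  client.on('error', () => {
    // Nobody is listening, so take the lock by becoming the listener.
    const server = net.createServer(() => {});
    server.listen(socketPath, onLocked);
  });
}

lock(
  () => console.log('Got the lock, starting up'),
  () => console.log('An application is already running')
);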
I have a fairly simple Node.js service that basically just fields a few HTTP requests. It runs fine via the GraalVM node command. However, when I use node --jvm --polyglot service.js, my Node service dies shortly after starting. Nothing else in the code has changed.
What is interesting is that the following code seems to kill my Node.js service
const { MongoClient } = require("mongodb")
console.log("got MongoClient")
When I run GraalVM's node without --jvm --polyglot, everything works fine.
If I comment out the Mongo code and run with --jvm --polyglot, everything also works fine.
What could possibly be going on where trying to run the MongoDB Node.js driver under GraalVM could be causing problems?
It may not actually be dying; rather, after starting my HTTP service with
const server = app.listen(port, () => console.log(`Server running... test at http://${hostname}:${port}/ping`))
it no longer accepts HTTP requests.
The best approach would be to raise an issue on GraalVM's repos, probably the Graal.js one: https://github.com/oracle/graaljs. It could be a bug.
You can also debug the process; that may reveal additional details of what's happening: https://www.graalvm.org/tools/chrome-debugger/
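For example, the Chrome debugger can usually be attached with the --inspect flag (shown here as an illustration; check the linked docs for the exact options your GraalVM version supports):

node --jvm --polyglot --inspect service.js

Then open chrome://inspect in Chrome and attach to the listed target.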
I am running a Node.js application using the forever npm module.
The application also connects to a Redis DB for cache checks. Quite often the API stops working, with the following error in the forever log:
{ ReplyError: Ready check failed: ERR max number of clients reached
at parseError (/home/myapp/core/node_modules/redis/node_modules/redis-parser/lib/parser.js:193:12)
at parseType (/home/myapp/core/node_modules/redis/node_modules/redis-parser/lib/parser.js:303:14)
at JavascriptRedisParser.execute (/home/myapp/ecore/node_modules/redis/node_modules/redis-parser/lib/parser.js:563:20) command: 'INFO', code: 'ERR' }
When I execute the CLIENT LIST command on the Redis server, it shows too many open connections. I have also set timeout = 3600 in my Redis configuration.
I do not have any unclosed Redis connection objects in my application code.
This happens once or twice a week depending on the application load; as a stop-gap solution I restart the node server (which works).
What could be a permanent solution in this case?
I have figured out why. This has nothing to do with Redis; increasing the OS file descriptor limit was only a temporary fix. I was using Redis in a web application, and a connection was being created for every new request.
Whenever the server was restarted, all the connections held by the Express server were released.
I solved this by creating a global connection object and re-using it; a new connection is created only when necessary.
You can do this by creating a global connection object, making the connection once, and checking that it is still connected each time before you use it. Check whether there is an existing solution for your language; in my case it was Perl with the Dancer framework, and I used a module called Dancer2::Plugin::Redis:
redis_plugin
Returns a Dancer2::Plugin::Redis instance. You can use redis_plugin to pass the plugin instance to 3rd party modules (backend API) so you can access the existing Redis connection there. You will need to access the actual methods of the plugin instance.
If you are not running a web server but rather a worker process or some other background job, you can use a simple helper function like this to re-use the connection.
Perl example:
use Redis;

my $redisclient;

sub get_redis_connection {
    my $redis = Redis->new(server => "www.example.com:6372", debug => 0);
    $redis->auth('abcdefghijklmnop');
    return $redis;
}

...

## when required: ping inside eval, since ping dies if the connection is gone
unless ($redisclient && eval { $redisclient->ping }) {
    warn "creating new redis connection";
    $redisclient = get_redis_connection();
}
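For the Node.js setup the original question describes, the same pattern might look like this (a minimal sketch assuming the node redis v4 client; the URL and module layout are illustrative):

// redisClient.js - create the client once and require() it everywhere
const { createClient } = require('redis');

const client = createClient({ url: 'redis://www.example.com:6372' });
client.on('error', (err) => console.error('Redis client error', err));

module.exports = client;

// app.js - connect once at startup, then re-use the same client
const client = require('./redisClient');

async function main() {
  await client.connect();               // one connection for the whole process
  const cached = await client.get('some-key');
  console.log(cached);
}

main();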
I was running into this issue in my chat app because I was creating a new Redis instance each time something connected, rather than creating it once.
// THE WRONG WAY: this creates two new Redis connections on every call.
// Imports assumed here: graphql-redis-subscriptions and ioredis.
import { RedisPubSub } from 'graphql-redis-subscriptions';
import Redis from 'ioredis';

export const getRedisPubSub = () => new RedisPubSub({
  subscriber: new Redis(REDIS_CONNECTION_CONFIG),
  publisher: new Redis(REDIS_CONNECTION_CONFIG),
});
and where I wanted to use the connection I was calling
// THE WRONG WAY
getRedisPubSub();
I fixed it by just creating the connection once when my app loaded.
// THE RIGHT WAY: one shared instance, created when the app loads
export const redisPubSub = new RedisPubSub({
  subscriber: new Redis(REDIS_CONNECTION_CONFIG),
  publisher: new Redis(REDIS_CONNECTION_CONFIG),
});
and then I passed the one-time initialized redisPubSub object to my createServer function.
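Anywhere else that needs pub/sub then imports the shared instance instead of constructing its own. A rough sketch, assuming RedisPubSub here is graphql-redis-subscriptions (the trigger name and resolver shape are just illustrative):

import { redisPubSub } from './redisPubSub';

// Subscriptions now re-use the same two Redis connections for the whole app.
export const resolvers = {
  Subscription: {
    messageSent: {
      subscribe: () => redisPubSub.asyncIterator('MESSAGE_SENT'),
    },
  },
};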
It was this article here that helped me see my error: https://docs.upstash.com/troubleshooting/max_concurrent_connections
I've been working on an example of using RabbitMQ for communication between Node.js microservices, and I'm trying to understand the best way for those microservices to survive a restart of the RabbitMQ server.
Full example is available on Github: https://github.com/ashleydavis/rabbit-messaging-example
You can start the system up by changing to the broadcast sub-directory and using docker-compose up --build.
With that running, I open another terminal and terminate the Rabbit server with docker-compose kill rabbit.
This causes an unhandled Node.js exception that kills my sender and receiver microservices, which were connected to the RabbitMQ server.
Now I'd like to be able to restart the RabbitMQ server (using docker-compose up rabbit) and have the original microservices come back online.
This is intended to run under Docker Compose for development and Kubernetes for production. I could set things up so that the microservices restart when they are terminated by the disconnection from RabbitMQ, but I'd prefer that they stay online (they might be doing other work that shouldn't be interrupted) and reconnect to RabbitMQ automatically when it becomes available again.
Does anyone know how to achieve automatic reconnection to RabbitMQ using the amqp library?
Just picking the sender service as an example of how to deal with it.
The error that is causing node to exit is that there is no 'error' handler on the stream the writer uses.
If you modify this part of the code:
https://github.com/ashleydavis/rabbit-messaging-example/blob/master/broadcast/sender/src/index.js#L13
Change the line in sender/src/index.js from
const messagingConnection = await retry(() => amqp.connect(messagingHost), 10, 5000);
to
const messagingConnection = await retry(() => amqp.connect(messagingHost), 10, 5000)
.then(x => {
return x.on('error', (err) => {
console.log('connect stream on error', err)
});
});
Just having the error handler means that the node process no longer exits with an unhandled exception. This does not make the sender code correct, though: it now needs to track whether it has a connection, only send data when it does, and retry connecting when it does not.
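A minimal sketch of that reconnect-aware shape, using amqplib's promise API (the host, exchange name, and retry delay are illustrative, not taken from the example repo):

const amqp = require('amqplib');

const messagingHost = 'amqp://rabbit';
const retryDelayMs = 5000;

let channel = null; // only publish while this is set

async function connect() {
  try {
    const connection = await amqp.connect(messagingHost);
    connection.on('error', (err) => console.log('amqp connection error', err.message));
    connection.on('close', () => {
      // Connection dropped (e.g. Rabbit restarted): forget the channel and retry.
      channel = null;
      console.log('amqp connection closed, reconnecting...');
      setTimeout(connect, retryDelayMs);
    });
    const ch = await connection.createChannel();
    await ch.assertExchange('broadcast', 'fanout', { durable: false });
    channel = ch;
    console.log('connected to RabbitMQ');
  } catch (err) {
    console.log('amqp connect failed, retrying...', err.message);
    setTimeout(connect, retryDelayMs);
  }
}

function send(message) {
  if (!channel) {
    return false; // no connection right now; the caller can queue or drop the message
  }
  return channel.publish('broadcast', '', Buffer.from(JSON.stringify(message)));
}

connect();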
A similar fix can be applied to the receiver.
This is a useful reference for setting up node so that it does not exit:
https://medium.com/dailyjs/how-to-prevent-your-node-js-process-from-crashing-5d40247b8ab2
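Broadly, that kind of setup means installing last-resort handlers so stray errors get logged rather than killing the process, roughly like this (a sketch; whether to keep running or exit cleanly after logging is an application decision):

process.on('uncaughtException', (err) => {
  console.error('uncaught exception', err);
});

process.on('unhandledRejection', (reason) => {
  console.error('unhandled rejection', reason);
});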
Anytime I run Firebase Realtime Database code from Node, using the Admin SDK, the process hangs. For example, I might have a node application deactivate.js:
const admin = require("firebase-admin");
// initialize app code...
admin.database().ref(`users/${userId}/active`).set(false)
I run it with node deactivate.js. The user is set to inactive, so that part works. But then the node process just hangs, and I have to press Ctrl-C to get back to a prompt.
Am I supposed to close connections or do something else in a Node application that uses Firebase? With Firebase Functions, I do have to return the promise generated by the above call. So does Firebase Functions automatically handle closing whatever it is I now need to handle manually?
The way you deal with Cloud Functions is not at all the way you deal with a standalone node process. When using the Admin SDK to access the Realtime Database in a standalone node process that must finish, do something like this instead to make sure the process exits when all the work is complete:
admin.database().ref(`users/${userId}/active`).set(false)
  .then(() => {
    process.exit(0)
  })
  .catch((err) => {
    console.error(err)
    process.exit(1)
  })
I'm not sure if this is a Service Fabric issue or an issue with Node.js.
Basically, this is my problem: I deploy the Node.js application and it works fine. I redeploy the application and it fails, with the server returning EADDRINUSE. When I run netstat -an, the port isn't in use. It's as if node is still running somewhere, somehow, but not appearing in tasklist etc.
Anyone got any ideas?
I'm not entirely sure, but I believe this is because the server I was using (Express), or rather node, was not shutting down and closing existing connections, causing Windows to think the ports were still in use. At least, that's how it seems.
I can't find it "officially" documented, but the quote below suggests that Service Fabric sends a Ctrl-C (SIGINT) to the application to let it end gracefully before killing it.
The following code appears to fix my issue:
var express = require("express");

var app = express();
var server = app.listen(17500);

// On Windows, translate Ctrl-C on stdin into a SIGINT event.
if (process.platform === "win32") {
    var rl = require("readline").createInterface({
        input: process.stdin,
        output: process.stdout
    });
    rl.on("SIGINT", function () {
        process.emit("SIGINT");
    });
}

// Close the server's open connections before exiting, so the port is freed.
process.on("SIGINT", function () {
    server.close(function () {
        process.exit(0);
    });
});
For Linux nodes, I suppose you'd want to listen for "SIGTERM" as well.
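That would just be another handler alongside the SIGINT one, something like:

// Same graceful shutdown for SIGTERM (relevant on Linux hosts).
process.on("SIGTERM", function () {
    server.close(function () {
        process.exit(0);
    });
});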
I would like to know if there's any remediation for this, though. In the scenario mentioned previously, the VMSS was completely unusable: I could not deploy, nor run, a node web server. How does one restart the cluster without destroying it and recreating it? I now realise you can't just restart VMSS instances willy-nilly, because Service Fabric completely breaks, apparently irrevocably, if you do that.
Rajeet Nair [RajeetN#MSFT]
Service Fabric also sends a Ctrl-C to service processes and waits for service to terminate. If the service doesn't terminate for 3 minutes, the process is killed.