Service Fabric Node.js guest application: Express.js server returns EADDRINUSE

Not sure if this is a Service Fabric issue or an issue with Node.js.
Basically this is my problem: I deploy the Node.js application and it works fine. I redeploy the node application and it fails to work, with the server returning EADDRINUSE. When I run netstat -an, the port isn't in use. It's as if node is still running somewhere, somehow, yet not appearing in tasklist, etc.
Anyone got any ideas?

I'm not entirely sure, but I believe this is because the server I was using (Express.js), or rather Node, was not shutting down and closing existing connections, causing Windows to think the ports were still in use. At least, that's how it seems.
I cannot find it "officially" documented, but from the quote below it reads as though Service Fabric sends a Ctrl-C (SIGINT) to the application to let it end gracefully before killing it.
The following code appears to fix my issue:
var express = require("express");

var app = express();
var server = app.listen(17500);

if (process.platform === "win32") {
    var rl = require("readline").createInterface({
        input: process.stdin,
        output: process.stdout
    });
    // Windows has no native SIGINT; re-emit Ctrl-C from stdin as one.
    rl.on("SIGINT", function () {
        process.emit("SIGINT");
    });
}

process.on("SIGINT", function () {
    // Stop accepting new connections, then exit once existing ones close.
    server.close(function () {
        process.exit(0);
    });
});
For Linux nodes, I suppose you'd want to listen for "SIGTERM" as well.
I would like to know if there's any sort of remediation for this, though. In the previously mentioned scenario the VMSS was completely unusable: I could not deploy, nor run, a node web server. How does one restart the cluster without destroying it and recreating it? I now realise you can't just restart VMSS instances willy-nilly, because Service Fabric completely breaks if you do that, apparently irrevocably.
Rajeet Nair [RajeetN#MSFT]
Service Fabric also sends a Ctrl-C to service processes and waits for service to terminate. If the service doesn't terminate for 3 minutes, the process is killed.

Related

setInterval running in Node.js stops automatically when hosted on cPanel, but works perfectly on localhost

I was running a setInterval in Node.js at a 1-second interval. It works without any issue on localhost, but after I hosted it on cPanel it stops running after 30 minutes of website inactivity. I emit on a socket every second from this setInterval. Why does this happen?
let intervalId = setInterval(() => {
    const io = req.app.get('socketio');
    let time = req.body.time++;
    console.log(time);
    // Symbol.toPrimitive converts the Timeout object to its numeric id.
    let timerId = intervalId[Symbol.toPrimitive]();
    io.to(req.user.email).emit("socketTimer", { intervalId: timerId, time });
}, 1000);
If you are running your Node.js app through the CloudLinux NodeJS Selector, it might be an issue with Passenger (running under Apache), which manages the app. You might want to try running your app through SSH, as you do on localhost, and see if it happens again. If you can't find a solution for that, I would suggest switching to a Node.js-specific web hosting provider.

Why is my Node.js service stopping unexpectedly under GraalVM?

I have a fairly simple Node.js service that basically just fields a few HTTP requests. This runs fine via the GraalVM node command. However, when I use node --jvm --polyglot service.js, my Node service dies shortly after starting. Nothing else in the code has changed.
What is interesting is that the following code seems to kill my Node.js service:
const { MongoClient } = require("mongodb")
console.log("got MongoClient")
And when I run Graal Node without --jvm --polyglot everything works fine.
If I comment out the Mongo stuff, running with --jvm --polyglot, everything works fine.
What could possibly be going on where trying to run the MongoDB Node.js driver under GraalVM could be causing problems?
It may not be that it dies; rather, after starting my HTTP service
const server = app.listen(port, () => console.log(`Server running... test at http://${hostname}:${port}/ping`))
it no longer accepts HTTP requests.
The best approach would be to raise an issue on GraalVM's repos, probably on the Graal.js one: https://github.com/oracle/graaljs. It could be a bug.
You can also debug the process and maybe that will reveal additional details of what's happening: https://www.graalvm.org/tools/chrome-debugger/

How should a Node.js microservice survive a Rabbitmq restart?

I've been working on an example of using Rabbitmq for communication between Node.js microservices and I'm trying to understand the best way for these microservices to survive a restart of the Rabbitmq server.
Full example is available on Github: https://github.com/ashleydavis/rabbit-messaging-example
You can start the system up by changing to the broadcast sub-directory and using docker-compose up --build.
With that running, I open another terminal and issue the following command to terminate the Rabbit server: docker-compose kill rabbit.
This causes a Node.js unhandled exception to kill my sender and receiver microservices that were connected to the Rabbitmq server.
Now I'd like to be able to restart the Rabbitmq server (using docker-compose up rabbit) and have the original microservices come back online.
This is intended to run under Docker-Compose for development and Kubernetes for production. I could just set this up so that the microservices restart when they are terminated by the disconnection from Rabbitmq, but I'd prefer it if the microservices could stay online (they might be doing other work that shouldn't be interrupted) and then reconnect to Rabbitmq automatically when it becomes available again.
Anyone know how to achieve automatic reconnection to Rabbitmq using the ampq library?
Just picking the sender service as an example of how to deal with it.
The error that is causing Node to exit is that there is no 'error' handler on the stream the writer uses.
Modify this part of the code:
https://github.com/ashleydavis/rabbit-messaging-example/blob/master/broadcast/sender/src/index.js#L13
Change the line in sender/src/index.js from
const messagingConnection = await retry(() => amqp.connect(messagingHost), 10, 5000);
to
const messagingConnection = await retry(() => amqp.connect(messagingHost), 10, 5000)
    .then(x => {
        return x.on('error', (err) => {
            console.log('connect stream on error', err);
        });
    });
Just having the error handler means that the Node process no longer exits with an unhandled exception. This does not make the sender code correct, though: it now needs to be modified to track whether it has a connection, send data only when it does, and retry connecting when it doesn't.
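One way to structure that is a small retry/reconnect helper, sketched below. This is illustrative, not the example repo's code: the amqp client is injected (with the real library you would pass require("amqplib")), and the host URL and 5-second delay are assumptions.

```javascript
// Wraps an amqp connection in a retry loop so the service stays alive
// while RabbitMQ is down, and only publishes while connected.
function createMessaging(amqp, messagingHost, retryDelayMs = 5000) {
    let channel = null; // null whenever we are disconnected

    async function connect() {
        for (;;) {
            try {
                const connection = await amqp.connect(messagingHost);
                // Without this handler, a broker restart kills the process.
                connection.on("error", err =>
                    console.log("connection error:", err.message));
                // When the broker closes the connection, reconnect later.
                connection.on("close", () => {
                    channel = null;
                    setTimeout(connect, retryDelayMs);
                });
                channel = await connection.createChannel();
                return;
            } catch (err) {
                console.log("connect failed, retrying:", err.message);
                await new Promise(resolve => setTimeout(resolve, retryDelayMs));
            }
        }
    }

    function send(exchange, payload) {
        // Only publish while connected; otherwise drop (or queue) the message.
        if (!channel) return false;
        channel.publish(exchange, "", Buffer.from(JSON.stringify(payload)));
        return true;
    }

    return { connect, send, isConnected: () => channel !== null };
}
```

Usage would be roughly `const m = createMessaging(require("amqplib"), "amqp://rabbit"); await m.connect();` with callers checking `m.send(...)`'s return value.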
A similar fix can be applied to the receiver.
This is a useful reference for the setup Node needs so that it does not exit:
https://medium.com/dailyjs/how-to-prevent-your-node-js-process-from-crashing-5d40247b8ab2

Ensuring that only a single instance of a nodejs application is running

Is there an elegant way to ensure that only one instance of a Node.js app is running?
I tried to use the pidlock npm package; however, it seems to work only on *nix systems.
Is it possible using a mutex?
Thanks
I've just found the single-instance library, which is intended to work on all platforms. I can confirm that it works well on Windows.
You can install it with npm i single-instance, and you need to wrap your application code like this:
const SingleInstance = require('single-instance');
const locker = new SingleInstance('my-app-name');

locker.lock().then(() => {
    // Your application code goes here
}).catch(err => {
    // This block will be executed if the app is already running
    console.log(err); // it will print out 'An application is already running'
});
If I understand its source code correctly, it implements the lock using a socket: if it can connect to a socket, then the application is already running. If it can't connect, then it creates the socket.

How do I prevent node.js from crashing? try-catch doesn't work

From my experience, a PHP server would write an exception to the log or return it to the client, but Node.js simply crashes. Surrounding my code with try-catch doesn't work either, since everything is done asynchronously. I would like to know what everyone else does in their production servers.
PM2
First of all, I would highly recommend installing PM2 for Node.js. PM2 is really great at handling crashes and at monitoring Node apps, as well as load balancing. PM2 restarts the Node app immediately whenever it crashes or stops for any reason, and even when the server restarts. So if the app still crashes someday despite our best efforts in code, PM2 can restart it immediately. For more info, see Installing and Running PM2.
Other answers here are really risky, as you can read in Node's own documentation at http://nodejs.org/docs/latest/api/process.html#process_event_uncaughtexception
If you are tempted by the other suggested answers, read the Node docs first:
Note that uncaughtException is a very crude mechanism for exception handling and may be removed in the future
Now, coming back to our solution for preventing the app itself from crashing.
After going through the docs, I finally arrived at what the Node documentation itself suggests:
Don't use uncaughtException; use domains with cluster instead. If you do use uncaughtException, restart your application after every unhandled exception!
DOMAIN with Cluster
What we actually do is send an error response to the request that triggered the error, while letting the others finish in their normal time, and stop listening for new requests in that worker.
In this way, domain usage goes hand-in-hand with the cluster module, since the master process can fork a new worker when a worker encounters an error. See the code below to understand what I mean.
By using Domain, and the resilience of separating our program into multiple worker processes using Cluster, we can react more appropriately, and handle errors with much greater safety.
var cluster = require('cluster');
var PORT = +process.env.PORT || 1337;

if (cluster.isMaster) {
    cluster.fork();
    cluster.fork();

    cluster.on('disconnect', function (worker) {
        console.error('disconnect!');
        cluster.fork();
    });
} else {
    var domain = require('domain');

    var server = require('http').createServer(function (req, res) {
        var d = domain.create();

        d.on('error', function (er) {
            // Something unexpected occurred
            console.error('error', er.stack);
            try {
                // Make sure we close down within 30 seconds
                var killtimer = setTimeout(function () {
                    process.exit(1);
                }, 30000);
                // But don't keep the process open just for that!
                killtimer.unref();

                // Stop taking new requests
                server.close();

                // Let the master know we're dead. This will trigger a
                // 'disconnect' in the cluster master, and then it will fork
                // a new worker.
                cluster.worker.disconnect();

                // Send an error to the request that triggered the problem
                res.statusCode = 500;
                res.setHeader('content-type', 'text/plain');
                res.end('Oops, there was a problem!\n');
            } catch (er2) {
                // Oh well, not much we can do at this point
                console.error('Error sending 500!', er2.stack);
            }
        });

        // Because req and res were created before this domain existed,
        // we need to explicitly add them.
        d.add(req);
        d.add(res);

        // Now run the handler function in the domain.
        d.run(function () {
            // You'd put your fancy application logic here.
            handleRequest(req, res);
        });
    });

    server.listen(PORT);
}
Note, though, that Domain is pending deprecation and will be removed once a replacement arrives, as stated in Node's documentation:
This module is pending deprecation. Once a replacement API has been finalized, this module will be fully deprecated. Users who absolutely must have the functionality that domains provide may rely on it for the time being but should expect to have to migrate to a different solution in the future.
But until that replacement is introduced, Domain with Cluster is the only good solution the Node documentation suggests.
For in-depth understanding Domain and Cluster read
https://nodejs.org/api/domain.html#domain_domain (Stability: 0 - Deprecated)
https://nodejs.org/api/cluster.html
Thanks to @Stanley Luo for sharing this wonderful in-depth explanation of Cluster and Domains:
Cluster & Domains
I put this code right under my require statements and global declarations:
process.on('uncaughtException', function (err) {
    console.error(err);
    console.log("Node NOT Exiting...");
});
It works for me. The only thing I don't like about it is that I don't get as much info as I would if I just let the thing crash.
As mentioned here, error.stack provides a more complete error message, including the line number that caused the error:
process.on('uncaughtException', function (error) {
    console.log(error.stack);
});
Try supervisor
npm install supervisor
supervisor app.js
Or you can install forever instead.
All this will do is recover your server when it crashes by restarting it.
forever can be used within the code to gracefully recover any processes that crash.
The forever docs have solid information on exit/error handling programmatically.
Using try-catch may solve some uncaught errors, but in complex situations it won't do the job right, such as catching errors thrown from async functions. Remember that in Node, any async function call can contain a potentially app-crashing operation.
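To illustrate that point: by the time an asynchronous callback throws, the surrounding try/catch has already returned, so the error escapes as an uncaughtException (the delay and message here are illustrative):

```javascript
try {
    setTimeout(function () {
        // Thrown on a later tick, outside the try/catch around setTimeout.
        throw new Error("boom"); // becomes an uncaughtException
    }, 10);
} catch (err) {
    // Never reached: the catch only covers the synchronous setTimeout call.
    console.log("caught:", err.message);
}
```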
Using uncaughtException is a workaround, but it is recognized as inefficient and is likely to be removed in future versions of Node, so don't count on it.
The ideal solution is to use domain: http://nodejs.org/api/domain.html
To make sure your app stays up and running even after your server crashes, follow these steps:
Use the Node cluster module to fork multiple processes, one per core. If one process dies, another will be booted up automatically. Check out: http://nodejs.org/api/cluster.html
Use domain to catch errors in async operations instead of try-catch or uncaughtException. I'm not saying that try-catch or uncaughtException are bad, though!
Use forever/supervisor to monitor your services.
Add a daemon to run your node app: http://upstart.ubuntu.com
Hope this helps!
Give the pm2 node module a try; it is very consistent and has great documentation. It is a production process manager for Node.js apps with a built-in load balancer. Please avoid uncaughtException for this problem.
https://github.com/Unitech/pm2
Works great on restify:
server.on('uncaughtException', function (req, res, route, err) {
    log.info('******* Begin Error *******\n%s\n*******\n%s\n******* End Error *******', route, err.stack);
    if (!res.headersSent) {
        return res.send(500, { ok: false });
    }
    res.write('\n');
    res.end();
});
By default, Node.js handles such exceptions by printing the stack trace to stderr and exiting with code 1, overriding any previously set process.exitCode. Registering an uncaughtException handler overrides that default behaviour:
process.on('uncaughtException', (err, origin) => {
    console.log(err);
});
uncaughtException is "a very crude mechanism" (so true) and domains are deprecated now. However, we still need some mechanism to catch errors around (logical) domains. This library:
https://github.com/vacuumlabs/yacol
can help you do this. With a little extra writing you can have nice domain semantics all around your code!