Application (web interface + cron jobs + HTTP clients) -- Java vs. Node.js

I am programming a prototype application with the following components:
a web interface for admins
cron jobs (statistic generation, ...)
interaction with other web services over HTTP
I started programming in Node.js (TypeScript) and got the connection to the other services working. Now I have a problem with cron jobs in Node.js.
I am using node-cron to execute the cron jobs.
Inside one job I need to obtain the status of many PCs and produce a summary of it. Doing this directly would block the main thread,
so I think I need to run it in a separate thread.
How can I do this in Node.js? Should I use webworker-threads?
Am I on the right track?
Or should I rather use Java (Grails/Spring) for this?
I really like the simplicity of Node.js (for HTTP clients, ...).
I hope someone can reassure me that I am on the right track.

I would just use Node's cluster module. Using cluster, a master can create multiple workers, which means your cron won't block incoming requests. Just make sure that only one worker runs the cron.
I have never worked with node-cron before, but I have experience with SyncedCron; the approach should be the same.
For the HTTP client there are a lot of libraries; you can check Request or httpclient.
Your code should look something like this:
var cluster = require('cluster');
var http = require('http');
var numWorkers = require('os').cpus().length - 1; // leave one CPU for the OS to use, or maybe two

if (cluster.isMaster) {
    console.log('Master cluster setting up ' + numWorkers + ' workers...');
    var cronPID = null;
    for (var i = 0; i < numWorkers; i++) {
        var worker = cluster.fork();
        if (i === 0) {
            // instruct the first worker to assume the role of the cron runner
            worker.send('you do it!');
            cronPID = worker.process.pid;
            console.log('worker ' + cronPID + ' chosen as cron worker!');
        }
    }
    cluster.on('online', function(worker) {
        console.log('Worker ' + worker.process.pid + ' is online');
    });
    cluster.on('exit', function(worker, code, signal) {
        // we have to revive the worker
        console.log('Worker ' + worker.process.pid + ' died with code: ' + code + ', and signal: ' + signal);
        console.log('Starting a new worker');
        var newWorker = cluster.fork();
        if (cronPID === worker.process.pid) {
            // the cron worker died, so we need to elect a new one
            newWorker.send('you do it!');
            cronPID = newWorker.process.pid;
            console.log('worker ' + cronPID + ' chosen as cron worker!');
        }
    });
} else {
    // worker side
    process.on('message', (msg) => {
        // validate the message you receive;
        // if it checks out, create the cron job here
    });
    // create your express app below; I assume you use express instead of node's http library
    var express = require('express');
    var app = express();
    app.post...........
}
Note:
To revive the master itself, use something like "forever".
Your server should have multiple cores, at least 4, but I would recommend more (maybe 8).
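For completeness, here is a minimal sketch of what the worker-side message handler could do with node-cron; the schedule and the job body are placeholder assumptions, not part of the answer above:
var nodeCron = require('node-cron');

process.on('message', function(msg) {
    // only the elected worker receives this message from the master
    if (msg === 'you do it!') {
        // run every five minutes; adjust the cron expression to your needs
        nodeCron.schedule('*/5 * * * *', function() {
            // gather the PC statuses and build the summary here
            console.log('cron tick in worker ' + process.pid);
        });
    }
});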

Related

How to restart a NodeJS API service if it fails?

I have the following NodeJS code:
cluster.js
'use strict';
const cluster = require('cluster');
var express = require('express');
const metricsServer = express();
const AggregatorRegistry = require('prom-client').AggregatorRegistry;
const aggregatorRegistry = new AggregatorRegistry();
var os = require('os');

if (cluster.isMaster) {
    for (let i = 0; i < os.cpus().length; i++) {
        cluster.fork();
    }
    metricsServer.get('/metrics', (req, res) => {
        aggregatorRegistry.clusterMetrics((err, metrics) => {
            if (err) console.log(err);
            res.set('Content-Type', aggregatorRegistry.contentType);
            res.send(metrics);
        });
    });
    metricsServer.listen(3013);
    console.log(
        'Cluster metrics server listening to 3013, metrics exposed on /metrics'
    );
} else {
    require('./app.js'); // handles all of our API service; it runs on port 3000
}
As you can see in the above code, I'm using NodeJS's manual cluster method instead of PM2's cluster mode, because I need to monitor my API via Prometheus. I usually start cluster.js via pm2 start cluster.js; however, due to a DB connection problem our app.js service failed, but cluster.js didn't. It looks like I haven't handled the DB connection error. I want to know:
How can I make sure my app.js and cluster.js always restart if they crash?
Is there a Linux crontab that can check that certain ports are always listening (i.e. 3000 and 3013)? (If this is a good idea, I'd appreciate it if you could provide the code; I'm not very familiar with Linux.)
Or I could deploy another NodeJS API to check that certain services are running, but since my API is real-time and handles a certain amount of load, I'm not too happy to do that.
Any help would be appreciated. Thanks in advance.
You can use monit (https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-monit) on your server to regularly monitor your process; if your project crashes, monit restarts it and can even notify you. You do have to do some configuration on the server, though: monit regularly polls a port, and if it doesn't get any reply from that port it restarts the process.
Otherwise you can use the forever module, which is easy to install and easy to use: https://www.npmjs.com/package/forever
It monitors your application and restarts it within about a second.
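If you do want a lightweight port check (your second idea), it doesn't have to live in cron or in another API; a minimal Node sketch that probes the two ports could look like this. The host, ports, and what you do on failure are assumptions for illustration:
var net = require('net');

function checkPort(port, cb) {
    var socket = net.connect({ port: port, host: '127.0.0.1' });
    socket.setTimeout(2000); // treat a slow connect as a failure
    socket.on('connect', function() { socket.destroy(); cb(null); });
    socket.on('timeout', function() { socket.destroy(); cb(new Error('timeout')); });
    socket.on('error', function(err) { cb(err); });
}

[3000, 3013].forEach(function(port) {
    checkPort(port, function(err) {
        if (err) console.error('Port ' + port + ' is down: ' + err.message);
        else console.log('Port ' + port + ' is up');
    });
});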
I recently found out that we can listen for the worker's exit event when it dies or is closed, and restart it accordingly.
Here is the code:
'use strict';
const cluster = require('cluster');
var express = require('express');
const metricsServer = express();
var os = require('os');

if (cluster.isMaster) {
    for (let i = 0; i < os.cpus().length; i++) {
        cluster.fork();
    }
    cluster.on('exit', function handleExit(worker, code, signal) {
        console.log('Worker has died.', worker.process.pid);
        console.log('Death was suicide:', worker.exitedAfterDisconnect);
        // If a worker was terminated accidentally (such as by an uncaught
        // exception), then we can try to restart it.
        if (!worker.exitedAfterDisconnect) {
            // use a new variable so we don't shadow the 'worker' parameter above
            var newWorker = cluster.fork();
            // CAUTION: if the worker dies immediately, perhaps due to a bug in
            // the code, you can run [from what I have read] into rapid CPU
            // consumption as the master continually tries to create new workers.
        }
    });
} else {
    require('./app.js');
}
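To mitigate the restart loop warned about in the CAUTION comment, the master can throttle how quickly it forks replacements. A minimal sketch of one possible guard; the five-deaths-per-minute limit is an arbitrary assumption, not something from the code above:
var cluster = require('cluster');
var recentDeaths = [];

cluster.on('exit', function(worker) {
    var now = Date.now();
    // forget deaths that happened more than a minute ago
    recentDeaths = recentDeaths.filter(function(t) { return now - t < 60000; });
    recentDeaths.push(now);
    if (recentDeaths.length <= 5) {
        cluster.fork();
    } else {
        // workers are dying too fast; wait a minute before trying again
        console.error('Too many worker deaths; delaying the next fork.');
        setTimeout(function() { cluster.fork(); }, 60000);
    }
});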

cluster in mongodb and how it picks up the cores

So I am new to cluster and MongoDB, and I came across this bunch of code:
#!/usr/bin/env node
var cluster = require('cluster');
var os = require('os');
var app = require('../main');
var models = require('../models');

if (cluster.isMaster) {
    var mCoreCount = os.cpus().length;
    console.log('Cores : ', mCoreCount);
    for (var i = 0; i < mCoreCount; i++) {
        cluster.fork();
    }
    cluster.on('exit', function() {
        cluster.fork();
    });
} else {
    models.sequelize.sync().then(function() {
        app.listen(app.get('port'), function() {
            console.log('api is live. ' + app.get('port'));
        });
    });
}
When I log the core count I get 4. I tried reading about it but could not understand anything; if someone could point out what is going on here, it would be a great help.
I understood that the greater the number of cores, the more Node instances there will be, but I guess right now it is picking the count up from my system. What happens in production?
This script tries to launch the NodeJS app in the most efficient way by creating a fork for each core available on the hardware server.
It picks up the number of cores with os.cpus().length.
In production it will do the same thing, and the number of forks will depend on the number of cores available on the production server.
Are you really sure the database is MongoDB in both environments? We can't really tell without seeing the whole app code.
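If you don't want production to blindly fork one worker per core, you can let an environment variable override the count. A minimal sketch; the WEB_CONCURRENCY name is just a common convention, not something the script above uses:
var cluster = require('cluster');
var os = require('os');

// prefer an explicit setting from the environment, fall back to the core count
var workerCount = parseInt(process.env.WEB_CONCURRENCY, 10) || os.cpus().length;
for (var i = 0; i < workerCount; i++) {
    cluster.fork();
}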

Is it possible to have 2 concurrent node.js POST requests to the same server in flight?

I'm writing some testing code in Node.js that just repeatedly POSTs HTTP requests to a web-server. In simplified form:
function doPost(opts, data) {
    var post_req = http.request(opts, function(res) {
        res.setEncoding('utf8')
        res.on('data', function (chunk) { })
    })
    post_req.write(JSON.stringify(data))
    post_req.end()
}

setInterval(doPost, interval, opts, msg)
I'd prefer that these requests be issued sequentially, i.e. that a subsequent POST is not sent until the first POST has received a response.
My question is: due to the non-blocking architecture of the underlying libuv library used by the runtime, is it possible that this code sends one POST out over the connection to the web-server, but then is able to execute another POST even if a response from the server has not yet arrived?
If I imagine this with a select() loop, I'd be free to call write() for the second POST and just get EWOULDBLOCK. Or if the network drops, will it just build up a backlog of POST requests queued to the IO thread-pool? It's unclear to me what behavior I should expect in this case. Is there something I must do to enforce completion of a POST before the next POST can start?
Inherently, Node.js runs on a single thread; to run multiple processes you'll have to use clusters, which are somewhat akin to multi-threading in Java (see the Node.js documentation on clusters). For example, your code will look something like this:
var cluster = require('cluster');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
    // Fork workers.
    for (var i = 0; i < numCPUs; i++) {
        cluster.fork();
    }
    cluster.on('exit', function(worker, code, signal) {
        console.log('worker ' + worker.process.pid + ' died');
    });
} else {
    // call the code in doPost
    doPost(opts, data);
}
I think I've found my answer. I ran some tests under packet capture and found that when the network drops it is important to throttle your POST requests; otherwise requests get enqueued to the IO pool, and depending on the state of connectivity some may send, others may not, and message order gets mangled.
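To enforce that ordering explicitly, one option is to schedule each POST only after the previous response has completed, instead of using setInterval. A minimal sketch, reusing the opts, data, and interval variables assumed in the question:
var http = require('http');

function doPostSequential(opts, data, done) {
    var req = http.request(opts, function(res) {
        res.setEncoding('utf8');
        res.on('data', function(chunk) {});
        res.on('end', done); // signal completion only once the response is fully read
    });
    req.on('error', done); // keep the loop going even after a network error
    req.write(JSON.stringify(data));
    req.end();
}

function loop() {
    doPostSequential(opts, data, function() {
        // start the next POST only after the current one has finished
        setTimeout(loop, interval);
    });
}
loop();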

NodeJS on multiple processors (PM2, Cluster, Recluster, Naught)

I am investigating options for running Node in a multi-core environment.
I'm trying to determine the best approach, and so far I've seen these options:
Use the built-in cluster library to spin up workers and respond to signals
Use PM2, but PM2 -i (cluster mode) is listed as beta
Naught
Recluster
Are there other alternatives? What are folks using in production?
I've been using the default cluster library, and it works very well. I've had over 10,000 concurrent connections (multiple clusters on multiple servers) and it works very well.
It is suggested to use cluster together with domain for error handling.
This is lifted straight from http://nodejs.org/api/domain.html. I've made some changes to how it spawns new workers for each core of your machine, got rid of the if/else, and added express.
var cluster = require('cluster'),
    http = require('http'),
    PORT = process.env.PORT || 1337,
    os = require('os'),
    server;

function forkClusters() {
    var cpuCount = os.cpus().length;
    // Create a worker for each CPU
    for (var i = 0; i < cpuCount; i += 1) {
        cluster.fork();
    }
}

// Master process
if (cluster.isMaster) {
    // You can also of course get a bit fancier about logging, and
    // implement whatever custom logic you need to prevent DoS
    // attacks and other bad behavior.
    //
    // See the options in the cluster documentation.
    //
    // The important thing is that the master does very little,
    // increasing our resilience to unexpected errors.
    forkClusters();
    cluster.on('disconnect', function(worker) {
        console.error('disconnect!');
        cluster.fork();
    });
}

function handleError(d) {
    d.on('error', function(er) {
        console.error('error', er.stack);
        // Note: we're in dangerous territory!
        // By definition, something unexpected occurred,
        // which we probably didn't want.
        // Anything can happen now! Be very careful!
        try {
            // make sure we close down within 30 seconds
            var killtimer = setTimeout(function() {
                process.exit(1);
            }, 30000);
            // But don't keep the process open just for that!
            killtimer.unref();
            // stop taking new requests
            server.close();
            // Let the master know we're dead. This will trigger a
            // 'disconnect' in the cluster master, and then it will fork
            // a new worker.
            cluster.worker.disconnect();
        } catch (er2) {
            // oh well, not much we can do at this point
            console.error('Error sending 500!', er2.stack);
        }
    });
}

// Worker process
if (cluster.isWorker) {
    // the worker
    //
    // This is where we put our bugs!
    var domain = require('domain');
    var express = require('express');
    var app = express();
    app.set('port', PORT);
    // See the cluster documentation for more details about using
    // worker processes to serve requests. How it works, caveats, etc.
    var d = domain.create();
    handleError(d);
    // Now run the handler function in the domain.
    //
    // Put all code here. Any code outside of domain.run will not handle
    // errors on the domain level, but will crash the app.
    d.run(function() {
        // this is where we start our server
        server = http.createServer(app).listen(app.get('port'), function() {
            console.log('Cluster %s listening on port %s', cluster.worker.id, app.get('port'));
        });
    });
}
We use Supervisor to manage our Node.JS processes: it starts them at boot and acts as a watchdog in case a process crashes.
We use Nginx as a reverse proxy to load-balance traffic between the processes, which listen on different ports.
This way each process is isolated from the others.
For example: Nginx listens on port 80 and forwards traffic to ports 8000-8003.
I was using PM2 for quite a while, but its paid offering is expensive for my needs: I have my own analytics environment and I don't require support, so I decided to experiment with alternatives. For my case, plain forever did the trick, and it is actually very simple:
forever -m 5 app.js
Another useful example is:
forever start app.js -p 8080

Nodejs cluster isn't responding to the web browser

I am a newbie in Nodejs and I'm trying something with cluster on Nodejs. But I've hit a problem:
I am using the example from the Nodejs API docs about Cluster:
var cluster = require('cluster');
var http = require('http');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
    // Fork workers.
    for (var i = 0; i < numCPUs; i++) {
        cluster.fork();
    }
    cluster.on('exit', function(worker, code, signal) {
        console.log('worker ' + worker.process.pid + ' died');
    });
} else {
    // Workers can share any TCP connection.
    // In this case it is an HTTP server.
    http.createServer(function(req, res) {
        res.writeHead(200);
        res.end("hello world\n");
    }).listen(8000);
}
I run the above code, but when I enter the URL in the browser (localhost:8000) the browser doesn't receive any response from Node; it keeps connecting... forever, until I kill the Node process. I am, however, seeing the "online" event fire.
How can I get this server to respond to requests?
P.S.: I tried using the "exit" event to respawn a new worker. Sometimes when I hit enter in the browser, the console logs that worker x died and then respawns a new worker, but the browser is still connecting...
http://i.stack.imgur.com/RHSYY.png
Help me :) and sorry for my bad English.
I installed the newest version of nodejs (v0.10.24).
I had a similar issue with cluster. Try putting the HTTP server creation part inside a setTimeout call (for example, delay it by 1000 ms); you should observe an improvement.
setTimeout(function(){ http.createServer...... }, 1000);
Besides, you may also try to create your server using jxcore with mt-keep to see if it works for a similar case:
$ jx mt-keep servercode.js
