I'm trying to learn the cluster module, and I've come across a piece of code I just can't get my mind around. It first forks child processes with the child_process module, and then it also uses cluster.fork().process. I've used the cluster module and child_process separately in an Express web server, and I know the cluster module works as a load balancer.
But I can't get the idea of using them together. There's also something else: cluster listens to those worker processes, and whenever a disconnect (and possibly exit) event reaches the master, it re-forks a process. Here is the question: let's assume the email worker crashes and the master is going to fork it again. How does it know it should fork the email worker? I mean, shouldn't it pass an id, which I can't see in this code?
var cluster = require("cluster");
const numCPUs = require("os").cpus().length;

if (cluster.isMaster) {
  // Fork dedicated child processes for the sms/email/notification workers.
  global.smsWorker = require("child_process").fork("./smsWorker");
  global.emailWorker = require("child_process").fork("./emailWorker");
  global.notifiWorker = require("child_process").fork("./notifWorker");

  // Fork one application worker per CPU.
  for (var i = 0; i < numCPUs; i++) {
    var worker = cluster.fork().process;
    console.log("worker started. process id %s", worker.pid);
  }

  // If an application worker gets disconnected, start a new one.
  cluster.on("disconnect", function (worker) {
    console.error("Worker disconnect: " + worker.id);
    var newWorker = cluster.fork().process;
    console.log("Worker started. Process id %s", newWorker.pid);
  });
} else {
  // `callback` is defined elsewhere in the application this snippet
  // was taken from; each cluster worker runs the server code through it.
  callback(cluster);
}
The disconnect event it is listening to comes from the cluster-specific code, not a generic process listener. So, that disconnect event only fires when one of the cluster child processes exits. If you have some other child processes processing email, then when one of those crashes, it would not trigger this disconnect event. You would have to monitor that child_process yourself separately from within the code that started it.
You can see where the monitoring is for the cluster.on('disconnect', ...) event here in the cluster source code.
Also, I should mention that the cluster module is for when you want pure horizontal scaling, where all the processes share the exact same work, each taking new incoming connections in turn. The cluster module is not for firing up a specific worker to carry out a specific task. For that, you would use either the Worker Threads module (to fire up a thread) or the child_process module (to fire up a new child process with a specific purpose).
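For example, here is a minimal sketch of supervising the email worker yourself. The file name mirrors the question's code, but the restart-on-exit policy is my own assumption, not something the original snippet does:

const { fork } = require("child_process");

function spawnEmailWorker() {
  const worker = fork("./emailWorker");
  // Unlike cluster's 'disconnect' event, here you know exactly which
  // script died, so there is no ambiguity about what to refork.
  worker.on("exit", (code, signal) => {
    console.error("emailWorker exited (code %s, signal %s); restarting", code, signal);
    global.emailWorker = spawnEmailWorker();
  });
  return worker;
}

global.emailWorker = spawnEmailWorker();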
Hi, I'm learning Node.js and I'm a bit confused by the cluster module. Straight to the point: the master creates workers. In my case I'm on a 32-bit Windows machine where os.cpus().length reports 2, so I get "2 workers". Consider the following simple program:
var cluster = require('cluster');
var os = require('os');
var numCPUs = os.cpus().length;

console.log("start");

if (cluster.isMaster) {
  for (var i = 0; i < numCPUs; ++i) {
    cluster.fork();
  }
}

console.log(cluster.isMaster ? "I'm Master" : "I'm worker");
Output
start
I'm Master
start
I'm worker
start
I'm worker
From googling I found that the master creates the workers and allocates each incoming request to an available worker. Here is my question: if two workers are available at all times, will every user request be handled twice? Thanks in advance.
The cluster module handles requests and routes them to a single worker.
Only one worker will ever receive a single request, even if every worker is available all of the time.
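You can check this with a small experiment: all workers below share the same port, and any single request is answered by exactly one of them (the port number is arbitrary):

var cluster = require('cluster');
var http = require('http');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }
} else {
  // Every worker listens on the same port; the master distributes
  // incoming connections so each request reaches one worker only.
  http.createServer(function (req, res) {
    res.end('Handled once, by worker ' + cluster.worker.id);
  }).listen(8000);
}

Each response names a single worker id, never two.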
Sources and good reading material: http://stackabuse.com/setting-up-a-node-js-cluster/ and https://nodejs.org/api/cluster.html
I'm currently trying to use Node.js Kue for processing jobs in a queue, but I believe I'm not doing it right.
The way I'm working now, I have two different services (which in this case I'm running with Docker Compose): a Web API built with Express that sends jobs to the queue, and a processing module. The issue here is with the processing module.
I've coded it as follows:
var kue = require('kue');
var config = require('./config');

var queue = kue.createQueue({
  prefix: config.redis.queuePrefix,
  redis: {
    port: config.redis.port,
    host: config.redis.host
  }
});

queue.process('jobType', function (job, done) {
  // do processing here, then call done() (or done(err) on failure)
});
When we run this with Node, it sits there waiting for jobs to be placed on the queue and processes them.
There are two issues, however:
It requires Redis to be available before this module starts. If we run it without Redis already up, it crashes because the host is not reachable, and the process ends.
If Redis suddenly becomes unavailable, the processing module also crashes because it cannot establish the connection, and the process is killed.
How can I avoid these problems?
My guess is that I should somehow make the code "wait" for Redis, but I have no idea how to do this.
How can this be done in this case?
You can use a promise to wait until Redis is loaded, then run your module:
loadRedis().then(() => {
  // load your module
});
Or you can use a generator to "pause" until Redis is loaded. Note that a bare generator never runs by itself, so you need a runner such as the co library:
const co = require('co');

co(function* () {
  const redisClient = yield loadRedis();
  // load your module
});
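For completeness, here is one way loadRedis could be implemented with the node_redis client. This is a sketch that assumes you reuse the config object from your question; the 'ready' and 'error' events are part of the client's API:

var redis = require('redis');
var config = require('./config');

function loadRedis() {
  return new Promise(function (resolve, reject) {
    var client = redis.createClient({
      port: config.redis.port,
      host: config.redis.host
    });
    // 'ready' fires once the connection is established.
    client.on('ready', function () { resolve(client); });
    // Without an 'error' listener, a connection failure would crash
    // the process, which is exactly the problem you described.
    client.on('error', function (err) { reject(err); });
  });
}

Keeping an 'error' listener attached for the lifetime of the client is also what prevents the crash when Redis goes away later; you can then decide whether to retry or exit gracefully.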
Currently, I'm faced with the task of scaling a Node.js app using Amazon EC2. From what I understand, the way to do this is to have each child server use all available processes via cluster, and to use sticky connections to ensure that every user connecting to the server is "remembered": routed to the worker their data lives on from previous sessions.
After doing this, the next best move from what I know is to deploy as many servers as needed and use nginx to load balance between all of them, again using sticky connections to know which "child" server each user's data is on.
So when a user connects to the server, is this what happens?
Client connection -> Find/Choose server -> Find/Choose process -> Socket.IO handshake/connection etc.
If not, please help me better understand this load balancing task. I also do not understand the importance of Redis in this situation.
Below is the code I'm using to make one machine use all of its CPUs, each running a separate Node.js process:
var express = require('express'),
    cluster = require('cluster'),
    net = require('net'),
    sio = require('socket.io'),
    sio_redis = require('socket.io-redis');

var port = 3502,
    num_processes = require('os').cpus().length;
if (cluster.isMaster) {
  // This stores our workers. We need to keep them to be able to reference
  // them based on source IP address. It's also useful for auto-restart,
  // for example.
  var workers = [];

  // Helper function for spawning a worker at index 'i'.
  var spawn = function (i) {
    workers[i] = cluster.fork();

    // Optional: restart worker on exit. Note the worker's 'exit' event
    // passes (code, signal), not the worker itself.
    workers[i].on('exit', function (code, signal) {
      console.log('respawning worker', i);
      spawn(i);
    });
  };

  // Spawn workers.
  for (var i = 0; i < num_processes; i++) {
    spawn(i);
  }

  // Helper function for getting a worker index based on IP address.
  // This is a hot path so it should be really fast. The way it works
  // is by converting the IP address to a number by removing the dots,
  // then compressing it to the number of slots we have.
  //
  // Compared against "real" hashing (from the sticky-session code) and
  // "real" IP number conversion, this function is on par in terms of
  // worker index distribution, only much faster.
  var worker_index = function (ip, len) {
    var s = '';
    for (var i = 0, _len = ip.length; i < _len; i++) {
      if (ip[i] !== '.') {
        s += ip[i];
      }
    }
    return Number(s) % len;
  };

  // Create the outside-facing server listening on our port.
  var server = net.createServer({ pauseOnConnect: true }, function (connection) {
    // We received a connection and need to pass it to the appropriate
    // worker. Get the worker for this connection's source IP and pass
    // it the connection.
    var worker = workers[worker_index(connection.remoteAddress, num_processes)];
    worker.send('sticky-session:connection', connection);
  }).listen(port);
} else {
  // Note we don't use a port here because the master listens on it for us.
  var app = express();

  // Here you might use middleware, attach routes, etc.

  // Don't expose our internal server to the outside.
  var server = app.listen(0, 'localhost'),
      io = sio(server);

  // Tell Socket.IO to use the redis adapter. By default, the redis
  // server is assumed to be on localhost:6379. You don't have to
  // specify them explicitly unless you want to change them.
  io.adapter(sio_redis({ host: 'localhost', port: 6379 }));

  // Here you might use Socket.IO middleware for authorization etc.
  console.log("Listening");

  // Listen to messages sent from the master. Ignore everything else.
  process.on('message', function (message, connection) {
    if (message !== 'sticky-session:connection') {
      return;
    }

    // Emulate a connection event on the server by emitting the
    // event with the connection the master sent us.
    server.emit('connection', connection);
    connection.resume();
  });
}
I believe your general understanding is correct, although I'd like to make a few comments:
Load balancing
You're correct that one way to do load balancing is having nginx load balance between the different instances, and inside each instance have cluster balance between the worker processes it creates. However, that's just one way, and not necessarily always the best one.
Between instances
For one, if you're using AWS anyway, you might want to consider using ELB. It was designed specifically for load balancing EC2 instances, and it makes the problem of configuring load balancing between instances trivial. It also provides a lot of useful features, and (with Auto Scaling) can make scaling extremely dynamic without requiring any effort on your part.
One feature ELB has, which is particularly pertinent to your question, is that it supports sticky sessions out of the box - just a matter of marking a checkbox.
However, I have to add a major caveat, which is that ELB can break socket.io in bizarre ways. If you just use long polling you should be fine (assuming sticky sessions are enabled), but getting actual websockets working is somewhere between extremely frustrating and impossible.
Between processes
While there are a lot of alternatives to using cluster, both within Node and without, I tend to agree cluster itself is usually perfectly fine.
However, one case where it does not work is when you want sticky sessions behind a load balancer, as you apparently do here.
First off, it should be made explicit that the only reason you even need sticky sessions in the first place is because socket.io relies on session data stored in-memory between requests to work (during the handshake for websockets, or basically throughout for long polling). In general, relying on data stored this way should be avoided as much as possible, for a variety of reasons, but with socket.io you don't really have a choice.
Now, this doesn't seem too bad, since cluster can support sticky sessions, using the sticky-session module mentioned in socket.io's documentation, or the snippet you seem to be using.
The thing is, since these sticky sessions are based on the client's IP, they won't work behind a load balancer, be it nginx, ELB, or anything else, since all that's visible inside the instance at that point is the load balancer's IP. The remoteAddress your code tries to hash isn't actually the client's address at all.
That is, when your Node code tries to act as a load balancer between processes, the IP it tries to use will just always be the IP of the other load balancer, that balances between instances. Therefore, all requests will end up at the same process, defeating cluster's whole purpose.
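You can see this concretely by running the question's own worker_index function on a fixed address (10.0.0.5 here is just an example load balancer IP):

// Every connection appears to come from the load balancer, so the
// input never changes and neither does the chosen worker:
worker_index('10.0.0.5', 8); // Number('10005') % 8 === 5, for every single connection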
You can see the details of this issue, and a couple of potential ways to solve it (none of them particularly pretty), in this question.
The importance of Redis
As I mentioned earlier, once you have multiple instances/processes receiving requests from your users, in-memory storage of session data is no longer sufficient. Sticky sessions are one way to go, although other, arguably better solutions exist, among them central session storage, which Redis can provide. See this post for a pretty comprehensive review of the subject.
Seeing as your question is about socket.io, though, I'll assume you probably meant Redis's specific importance for websockets, so:
When you have multiple socket.io servers (instances/processes), a given user will be connected to only one such server at any given time. However, any of the servers may, at any time, wish to emit a message to a given user, or even a broadcast to all users, regardless of which server they're currently under.
To that end, socket.io supports "Adapters", of which Redis is one, that allow the different socket.io servers to communicate among themselves. When one server emits a message, it goes into Redis, and then all servers see it (Pub/Sub) and can send it to their users, making sure the message will reach its target.
This, again, is explained in socket.io's documentation regarding multiple nodes, and perhaps even better in this Stack Overflow answer.
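To make the adapter's role concrete: once io.adapter(sio_redis(...)) is set up, as in your snippet, any process can emit and the message will reach clients held by other processes. A minimal sketch (the room name is an arbitrary example):

// Run from any worker process; the Redis adapter publishes the emit
// over Pub/Sub, and the process that actually holds the matching
// sockets delivers it.
io.to('user:42').emit('notification', { text: 'hello' });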
I'm developing a node.js app and I am in need of heavy Redis usage. The app will be clustered across 8 CPU cores.
Right now I have 100 concurrent connections to Redis, because every worker has several modules that each call require('redis').createClient().
Scenario A:
file1.js:
var redis = require('redis').createClient();
file2.js
var redis = require('redis').createClient();
Scenario B:
redis.js
var redis = require('redis').createClient();
module.exports = redis;
file1.js
var redis = require('./redis');
file2.js
var redis = require('./redis');
Which approach is better: creating a new Redis connection in every file I introduce (scenario A), or creating one Redis connection globally (scenario B) and sharing it across all my modules? What are the drawbacks/benefits of each solution?
Thanks in advance!
When I face a question such as this I generally think about three basic questions.
Which is more readable?
Which allows better code reuse?
Which is more efficient?
Not necessarily in this order as it depends on the scenario, but I believe in this case all three of these questions are in favor of option B.
If you ever needed to modify the options for createClient, you would then need to edit them in every file that uses it: in option A that is every file using Redis, in option B it is just redis.js. Also, if a newer or different product comes out and you want to replace Redis, it would be feasible to make redis.js a wrapper for a different package, or even for a newer Redis client, substantially cutting down conversion time.
Globals are generally a bad thing, but in this example redis.js should not be storing mutable state, so there is no problem having a global/singleton in this context.
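To illustrate that wrapper idea (a sketch; the exposed get/set surface is just an example), redis.js could hide the raw client entirely, so swapping the underlying library later only touches this one file:

// redis.js
var client = require('redis').createClient();

module.exports = {
  get: function (key, cb) { client.get(key, cb); },
  set: function (key, value, cb) { client.set(key, value, cb); }
};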
Both Node and Redis can handle lots of connections pretty well, so that's not a problem.
In your situation, you're creating Redis connections at the startup of your application, so the number of connections you're setting up is limited (in the sense that after your application is started, the number of connections will be constant).
Where you would want to reuse the same connection is in highly dynamic situations, for instance an HTTP server that needs to query Redis for every request. Creating a new connection for each request would be a waste of resources (connections constantly created and destroyed), so reusing one connection for all requests is preferable.
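For instance, combined with the shared module from scenario B, an Express handler reuses that one connection instead of dialing Redis per request (the route and key here are made up for the example):

var express = require('express');
var redis = require('./redis'); // the shared client from scenario B

var app = express();

app.get('/visits', function (req, res) {
  // One long-lived connection serves every request; nothing is
  // created or torn down per request.
  redis.incr('visits', function (err, count) {
    if (err) return res.status(500).end();
    res.send('visit #' + count);
  });
});

app.listen(3000);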
As for which of the two scenarios I'd prefer, I'm leaning towards scenario A myself.
You can create a file to handle the connection and your Redis helper functions.
redis-con.js
const redis = require('redis');

let redisClient;

(async () => {
  redisClient = redis.createClient();
  redisClient.on("error", (error) => console.error(`Error Redis: ${error}`));
  await redisClient.connect();
})();

// createClient() runs synchronously inside the IIFE above, so the
// client object is already assigned when this export executes; the
// connection itself completes in the background.
module.exports = redisClient;
Then you need to create functions to handle set, get and del, as sketched below. After that, just import the connection wherever you need it.
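A sketch of those helpers (the file and function names are just examples), using the same node-redis v4 promise API as the connection file:

// redis-functions.js
const redisClient = require('./redis-con');

async function setValue(key, value) {
  return redisClient.set(key, value);
}

async function getValue(key) {
  return redisClient.get(key);
}

async function delValue(key) {
  return redisClient.del(key);
}

module.exports = { setValue, getValue, delValue };

Each helper returns a promise, matching the await-based style of the connection file.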