Multithread request handling in node.js (deployed in Kubernetes and behind Nginx) - node.js

I have a single-threaded, web-based, CPU-intensive workload implemented as a Node.js (Express) server, deployed on Kubernetes with no CPU requests/limits (best effort). On average, this workload takes ~700-800ms to execute on a quad-core physical machine. The server sits behind an Nginx load balancer (all with default configuration). In short, the workload is as follows:
for (let i = 0; i < 100; i++) {
    const prime_length = 100;
    const diffHell = crypto.createDiffieHellman(prime_length);
    const key = diffHell.generateKeys('base64');
    checksum(key);
}
I have an event handler in my Express app.js that logs a timestamp to the console when it receives or sends a request, as follows:
app.use((req, res, next) => {
    const start = process.hrtime();
    console.log(`Received ${req.method} ${req.originalUrl} from ${req.headers['referer']} at {${now()}} [RECEIVED]`)
    res.on('close', () => {
        const durationInMilliseconds = getDurationInMilliseconds(start);
        console.log(`Closed received ${req.method} ${req.originalUrl} from ${req.headers['referer']} {${now()}} [CLOSED] ${durationInMilliseconds.toLocaleString()} ms`)
    });
    next();
})
I'm sending 3 parallel requests to this service from 3 different physical machines at the same time. All these servers, plus all Kubernetes nodes, have NTP enabled and their local clocks are synchronized.
To run the traffic, I ssh into all 3 servers in separate screens (using Linux's screen), prepare the curl command on each command line, and then send an Enter to all of them with the following command, so the traffic starts at the same time:
screen -S 12818.3 -X stuff "
" & screen -S 12783.2 -X stuff "
" & screen -S 12713.1 -X stuff "
"
From the logs, I can see all 3 requests are sent at the same time: at 17:26:37.888.
Interestingly, the server receives each request only after finishing the previous one:
Request 1 is received at 17:26:37.922040382 and takes 740.128ms to process
Request 2 is received at 17:26:38.663390107 and takes 724.524ms to process
Request 3 is received at 17:26:39.388508923 and takes 695.894ms to process
Here are the logs generated in the container (extracted using kubectl logs -l name=s1 --tail=9999999999 --timestamps):
2020-10-11T17:26:37.922040382Z Received GET /s1/cpu/100 from undefined at {1602429997921393902} [RECEIVED]
2020-10-11T17:26:38.662193765Z Closed received GET /s1/cpu/100 from undefined {1602429998661523611} [CLOSED] 740.128 ms
2020-10-11T17:26:38.663390107Z Received GET /s1/cpu/100 from undefined at {1602429998662810195} [RECEIVED]
2020-10-11T17:26:39.387987847Z Closed received GET /s1/cpu/100 from undefined {1602429999387339320} [CLOSED] 724.524 ms
2020-10-11T17:26:39.388508923Z Received GET /s1/cpu/100 from undefined at {1602429999387912718} [RECEIVED]
2020-10-11T17:26:40.084479697Z Closed received GET /s1/cpu/100 from undefined {1602430000083806321} [CLOSED] 695.894 ms
I checked the CPU usage using both htop and pidstat, and strangely only 1 core is utilized the whole time...
I was expecting the Node.js server to receive all requests at the same time and handle them in different threads (by spawning new threads), but that seems not to be the case. How can I make Node.js handle requests in parallel and utilize all the cores it has?
Here is my full code:
var express = require('express');
const crypto = require('crypto');
const now = require('nano-time');
var app = express();

function checksum(str, algorithm, encoding) {
    return crypto
        .createHash(algorithm || 'md5')
        .update(str, 'utf8')
        .digest(encoding || 'hex');
}

function getDurationInMilliseconds (start) {
    const NS_PER_SEC = 1e9;
    const NS_TO_MS = 1e6;
    const diff = process.hrtime(start);
    return (diff[0] * NS_PER_SEC + diff[1]) / NS_TO_MS;
}

app.use((req, res, next) => {
    const start = process.hrtime();
    console.log(`Received ${req.method} ${req.originalUrl} from ${req.headers['referer']} at {${now()}} [RECEIVED]`)
    res.on('close', () => {
        const durationInMilliseconds = getDurationInMilliseconds(start);
        console.log(`Closed received ${req.method} ${req.originalUrl} from ${req.headers['referer']} {${now()}} [CLOSED] ${durationInMilliseconds.toLocaleString()} ms`)
    });
    next();
})

app.all('*/cpu', (req, res) => {
    for (let i = 0; i < 100; i++) {
        const prime_length = 100;
        const diffHell = crypto.createDiffieHellman(prime_length);
        const key = diffHell.generateKeys('base64');
        checksum(key);
    }
    res.send("Executed 100 Diffie-Hellman checksums in 1 thread(s)!");
});

module.exports = app;
app.listen(9121)

By design, Node.js runs your JavaScript in a single thread. It may use other threads for certain things, such as built-in crypto operations that have an asynchronous calling interface or disk I/O, but anything you've written in pure JavaScript runs in a single thread. For CPU-intensive JavaScript code, you will need to change the design of your code specifically in order to use multiple CPUs for that work.
Your options are child processes or WorkerThreads. Probably what you want to do is set up a pool of worker threads (likely one per CPU core) and a queue of jobs to be processed. Then, as a job is inserted into the queue, you check whether a worker thread is available; if so, you immediately send the job off to it. If not, you wait until a worker thread notifies you that it is finished and available for the next job.
In Node.js, WorkerThreads are entirely separate instances of the V8 JavaScript engine, and you communicate between worker threads and your main process via messages.
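To make that concrete, here is a minimal sketch of the pool-plus-queue idea, adapted to the Diffie-Hellman job from the question. The worker.js filename and the runJob() helper are made up for illustration, not part of the original code:
// main.js -- a fixed pool of worker threads fed from a job queue
const { Worker } = require('worker_threads');
const os = require('os');

const queue = [];   // jobs waiting to run: { payload, resolve }
const idle = [];    // workers with nothing to do

function dispatch(worker) {
    if (queue.length > 0) {
        worker.job = queue.shift();
        worker.postMessage(worker.job.payload);
    } else {
        idle.push(worker);
    }
}

for (let i = 0; i < os.cpus().length; i++) {
    const worker = new Worker('./worker.js');
    worker.on('message', result => {
        worker.job.resolve(result);  // complete the job this worker ran
        worker.job = null;
        dispatch(worker);            // pick up the next queued job, if any
    });
    dispatch(worker);
}

function runJob(payload) {
    return new Promise(resolve => {
        queue.push({ payload, resolve });
        if (idle.length > 0) dispatch(idle.pop());
    });
}

// the Express handler now awaits the pool instead of blocking the event loop
app.all('*/cpu', async (req, res) => {
    await runJob(100);
    res.send("Executed 100 Diffie-Hellman checksums in a worker thread!");
});

// worker.js -- runs the CPU-heavy loop from the question
const { parentPort } = require('worker_threads');
const crypto = require('crypto');

parentPort.on('message', iterations => {
    for (let i = 0; i < iterations; i++) {
        const diffHell = crypto.createDiffieHellman(100);
        crypto.createHash('md5').update(diffHell.generateKeys('base64'), 'utf8').digest('hex');
    }
    parentPort.postMessage('done');
});
With something like this in place, three simultaneous requests should occupy three cores instead of queueing behind one.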

I used the cluster module of node.js and followed the approach mentioned here: https://medium.com/tech-tajawal/clustering-in-nodejs-utilizing-multiple-processor-cores-75d78aeb0f4f
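For reference, a minimal sketch of that cluster approach might look like the following, assuming app.listen(9121) is moved out of app.js so that only the workers listen:
const cluster = require('cluster');
const numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
    for (let i = 0; i < numCPUs; i++) cluster.fork();
    cluster.on('exit', worker => {
        console.log(`worker ${worker.process.pid} died, forking a replacement`);
        cluster.fork();
    });
} else {
    // each worker has its own event loop but shares the listening socket,
    // so up to numCPUs requests can burn CPU at the same time
    const app = require('./app');
    app.listen(9121);
}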

Related

worker thread won't respond after first message?

I'm making a server script and, to make it easier for both hosts and clients to do what they want, I made a customizable server script that runs using NW.js (with a visual interface). Said script was made using web workers, since NW.js was having problems supporting worker threads.
Now that NW.js has fixed its problems with worker threads, I've been trying to move everything that was inside the web workers over to worker threads, but there's a problem: when the main thread receives the answer from the second thread, the latter stops responding to any subsequent message.
For example, running the following code with either NW.js or Node.js itself will return "pong" only once:
const { Worker } = require('worker_threads');
const worker = new Worker('const { parentPort } = require("worker_threads");parentPort.once("message",message => parentPort.postMessage({ pong: message })); ', { eval: true });
worker.on('message', message => console.log(message));
worker.postMessage('ping');
worker.postMessage('ping');
How do I configure the worker so it will keep responding to whatever message it receives after the first one?
Because you use the EventEmitter.once() method. According to the documentation, this method does the following:
Adds a one-time listener function for the event named eventName. The next time eventName is triggered, this listener is removed and then invoked.
If you need your worker to process more than one event, use EventEmitter.on():
const worker = new Worker('const { parentPort } = require("worker_threads");' +
    'parentPort.on("message", message => parentPort.postMessage({ pong: message }));',
    { eval: true });
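For completeness, here is a runnable version of the fixed example; with on() in place, both pings now get answered:
const { Worker } = require('worker_threads');

const worker = new Worker('const { parentPort } = require("worker_threads");' +
    'parentPort.on("message", message => parentPort.postMessage({ pong: message }));',
    { eval: true });

worker.on('message', message => console.log(message)); // logs { pong: 'ping' } twice
worker.postMessage('ping');
worker.postMessage('ping');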

Nodejs Cluster Architecture reading from single REDIS instance

I'm using the Node.js cluster module to have multiple workers running.
I created a basic architecture where there is a single MASTER process, which is basically an Express server handling multiple requests; the main task of the MASTER is writing incoming request data into a Redis instance. The other workers (numCPUs - 1) are non-master, i.e. they don't handle any requests, as they are just consumers. I have two features, namely ABC and DEF, and I distributed the non-master workers evenly across the features by assigning them a type.
For example, on an 8-core machine:
1 will be the MASTER instance handling requests via the Express server
The remaining (8 - 1 = 7) will be distributed evenly: 4 to feature:ABC and 3 to feature:DEF
Non-master workers are basically consumers, i.e. they read from Redis, which only the MASTER worker can write to.
Here's the code for the same:
if (cluster.isMaster) {
    // Fork workers.
    for (let i = 0; i < numCPUs - 1; i++) {
        ClusteringUtil.forkNewClusterWithAutoTypeBalancing();
    }
    cluster.on('exit', function(worker) {
        console.log(`Worker ${worker.process.pid}::type(${worker.type}) died`);
        ClusteringUtil.removeWorkerFromList(worker.type);
        ClusteringUtil.forkNewClusterWithAutoTypeBalancing();
    });
    // Start consuming on server-start
    ABCConsumer.start();
    DEFConsumer.start();
    console.log(`Master running with process-id: ${process.pid}`);
} else {
    console.log('CLUSTER type', cluster.worker.process.env.type, 'running on', process.pid);
    if (
        cluster.worker.process.env &&
        cluster.worker.process.env.type &&
        cluster.worker.process.env.type === ServerTypeEnum.EXPRESS
    ) {
        // worker for handling requests
        app.use(express.json());
        ...
    }
}
Everything works fine except the consumers reading from Redis.
Since there are multiple consumers for a particular feature, each one reads the same message and starts processing it individually, which is what I don't want. Say there are 4 consumers: 1 is marked as busy and cannot consume until free, and 3 are available. Once the MASTER writes a message for that feature into Redis, the problem is that all 3 available consumers of that feature start consuming it. This means that for a single message, the job is done as many times as there are available consumers.
const stringifedData = JSON.stringify(req.body);
const key = uuidv1();
const asyncHsetRes = await asyncHset(type, key, stringifedData);
if (asyncHsetRes) {
    await asyncRpush(FeatureKeyEnum.REDIS.ABC_MESSAGE_QUEUE, key);
    res.send({ status: 'success', message: 'Added to processing queue' });
} else {
    res.send({ error: 'failure', message: 'Something went wrong in adding to queue' });
}
The consumer simply accepts messages and stops when it is busy:
module.exports.startHeartbeat = startHeartbeat = async function(config = {}) {
    if (!config || !config.type || !config.listKey) {
        return;
    }
    heartbeatIntervalObj[config.type] = setInterval(async () => {
        await asyncLindex(config.listKey, -1).then(async res => {
            if (res) {
                await getFreeWorkerAndDoJob(res, config);
                stopHeartbeat(config);
            }
        });
    }, HEARTBEAT_INTERVAL);
};
Ideally, a message should be read by only one consumer of that particular feature. After consuming, it is marked as busy so it won't consume further until free (I have handled this). The next message should then be processed by only one of the other available consumers.
Please help me in tackling this problem. Again, I want one message to be read by only one free consumer, and the rest of the free consumers should wait for a new message.
Thanks
I'm not sure I fully understand your Redis consumer architecture, but I feel like it contradicts the use case of Redis itself. What you're trying to achieve is essentially queue-based messaging with the ability to acknowledge a message once it's done.
Redis has its own pub/sub feature, but it is built on a fire-and-forget principle. It doesn't distinguish between consumers; it just sends the data to all of them, assuming that it's their logic to handle the incoming data.
I recommend you use a queue server like RabbitMQ. You can achieve your goal with features that AMQP 0-9-1 supports: message acknowledgment, a consumer prefetch count, and so on. You can set up your cluster with very flexible configs like: I want X consumers, each handling 1 unique (!) message at a time, and they receive new ones only after they let the server (RabbitMQ) know that they successfully finished processing. This is highly configurable and robust.
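As a rough sketch of that setup (using the amqplib package; the queue name and doJob() are placeholders, not your code), a consumer with a prefetch count of 1 plus manual acknowledgments ensures each message is handled by exactly one free consumer:
const amqp = require('amqplib');

async function startConsumer() {
    const conn = await amqp.connect('amqp://localhost');
    const ch = await conn.createChannel();
    await ch.assertQueue('abc_jobs', { durable: true });
    ch.prefetch(1); // hold at most one unacknowledged message per consumer
    ch.consume('abc_jobs', async msg => {
        await doJob(JSON.parse(msg.content.toString()));
        ch.ack(msg); // only now will the server deliver this consumer another message
    });
}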
However, if you want to go serverless with a fully managed service, so that you don't have to provision virtual machines or anything else to run a message queue server of your choice, you can use AWS SQS. It has a pretty similar API and feature list.
Hope it helps!

How to fork a process in node that writes express response

I'd like to fork a long-running Express request in Node and send the Express response from the child, allowing the parent to serve other requests. I'm already using cluster, but I'd like to fork another process in addition to the cluster for specific long-running requests. What I'd like to prevent is all the processes in the cluster being consumed by specific long-running requests while most of the other requests are fast.
Thanks
var express = require('express');
var webserver = express();

webserver.get("/test", function(request, response) {
    // long running HTTP request
    response.send(...);
});
What I'm thinking of is something like the following, although I'm not sure it works:
var cp = require('child_process');
var express = require('express');
var webserver = express();

webserver.get("/test", function(request, response) {
    var child = cp.fork('do_nothing.js');
    child.on("message", function(message) {
        if (message == "start") {
            response.send(...);
            process.exit();
        }
    });
    child.send("start");
});
Let me know if anyone knows how to do this.
Edit: So, the idea is that the child could take a long time. There are a limited number of processes in the cluster serving Express responses, and I don't want them all consumed by a specific long-running request type. In the code below, the entire cluster would be consumed by the long-running requests.
while (1) {
    if (rand() % 100 == 0) {
        if (fork() == 0) {
            sleep(hour(1));
            exit(0);
        }
    } else {
        sleep(second(1));
    }
    waitpid(WAIT_ANY, &status, WNOHANG);
}
Edit: I am going to mark the self-answer as solved. I'm sure there's a way to pass a socket to a child but it's not really necessary because the cluster master can manage all child processes. Thanks for your help.
Your second code block is confusing because it appears that you're killing the parent process with process.exit() rather than the child.
In any case, if we assume the problem is this:
You have a cluster of "regular processes".
Occasionally, you want to take an incoming request that was assigned to one of the cluster processes and pass it off to a long running child that will eventually send the response.
After sending the response, the long running child process should exit.
You have a couple options.
You can have the clustered process that was assigned the request, start up a child, send it some initial data and listen for a message back from the child. When it gets the message back from the child, it can send the response and kill the child. This appears to be what you're attempting to do in your second code block.
You can have the clustered process that was assigned the request, start up a child and reassign the request socket to the child process and the child can then own that socket from then on. When it finally sends the response, it can then exit itself.
The first is simpler because no socket assignment from one process to another is required. To implement the second, you'd have to write or find the code to do the socket reassignment and then reconstitute it as an Express request within the child. The cluster module does something like this, so the code is there to be found and learned from, but I'm not aware of a trivial way to do it.
Personally, I don't see any particular downside to the first. I suppose if the clustered process were to die for some reason, you'd lose the long-running request socket, but hopefully you can just code your clustered processes not to die unnecessarily.
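For illustration, here's a minimal sketch of the first option; slow-job.js is a hypothetical module that does the long-running work and posts its result back:
const cp = require('child_process');
const express = require('express');
const webserver = express();

webserver.get('/test', function(request, response) {
    const child = cp.fork('slow-job.js');   // the heavy work happens in the child
    child.on('message', function(result) {
        response.send(result);              // respond from the parent...
        child.kill();                       // ...then clean up the child
    });
    child.send({ start: true });            // hand the child its input
});

webserver.listen(8000);

// slow-job.js (hypothetical): keeps the request-serving process free
process.on('message', function(input) {
    const result = doLongRunningWork(input); // stand-in for the real work
    process.send(result);
});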
You can read this article on sending a socket to a new node.js process:
Sending a socket to a forked process
And, this node.js doc on sending a socket:
Example: sending a socket object
So, I've verified that this is not necessary for my use case, but I was able to get it working using the code below. It's not exactly what the OP asks for, but it works.
What it's doing is sending an instruction to the cluster master, which forks the additional process upon receipt of the slow express request.
Since the express request doesn't need to know the status of the newly forked cluster worker, it just handles the slow request as normal and then exits.
The instruction to the cluster master informs the master not to replace the dying slow express request process, so the number of workers reverts to the original number after the slow request finishes.
The pool grows when there are slow requests but reverts to normal afterwards, preventing, say, 20 simultaneous slow requests from bringing down the cluster.
var numberOfWorkers = 10;
var workerCount = 0;
var slowRequestPids = { };

if (cluster.isMaster) {
    for (var i = 0; i < numberOfWorkers; i++) {
        workerCount++;
        cluster.fork();
    }
    cluster.on('exit', function(worker) {
        workerCount--;
        var pidString = String(worker.process.pid);
        if (pidString in slowRequestPids) {
            delete slowRequestPids[pidString];
            if (workerCount >= numberOfWorkers) {
                logger.info('not forking replacement for slow process');
                return;
            }
        }
        logger.info('forking replacement for a process that died unexpectedly');
        workerCount++;
        cluster.fork();
    });
    cluster.on("message", function(msg) {
        if (typeof msg.fork != "undefined" && workerCount < 100) {
            logger.info("forking additional process upon slow request");
            slowRequestPids[msg.fork] = 1;
            workerCount++;
            cluster.fork();
        }
    });
    return;
}

webserver.use("/slow", function(req, res) {
    process.send({ fork: String(process.pid) });
    sleep.sleep(300);
    res.send({ response_from: "virtual child" });
    res.on("finish", function() {
        logger.info('process exits, restoring cluster to original size');
        process.exit();
    });
});

Node cluster; only one process being used

I'm running a clustered Node app with 8 worker processes. I log output when serving requests, including the ID of the process that handled the request:
app.get('/some-url', function(req, res) {
    console.log('Request being handled by process #' + process.pid);
    res.status(200).send('yayyy');
});
When I furiously refresh /some-url, I see in the output that the same process is handling the request every time.
I used node load-test to query my app. Again, even with 8 workers available, only one of them handles every single request. This is obviously undesirable as I wish to load-test the clustered app to see the overall performance of all processes working together.
Here's how I'm initializing the app:
var cluster = require('cluster');

if (cluster.isMaster) {
    for (var i = 0; i < 8; i++) cluster.fork();
} else {
    var app = require('express')();
    // ... do all setup on `app`...
    var server = require('http').createServer(app);
    server.listen(8000);
}
How do I get all my workers working?
Your request does not use any resources. I suspect the same worker is always chosen because it finishes handling each request before the next one comes in.
What happens if you do some calculation inside that takes longer than the gap between requests? As it stands, the worker is never busy between accepting a request and answering it.
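One way to test that hypothesis is to make the handler burn CPU for a while (the 100ms below is arbitrary) and watch the logged pids spread across the workers:
app.get('/some-url', function(req, res) {
    const end = Date.now() + 100;
    while (Date.now() < end) {}  // simulate ~100ms of CPU-bound work
    console.log('Request being handled by process #' + process.pid);
    res.status(200).send('yayyy');
});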

How to increase event loop capacity in nodejs?

I know that Node.js uses a single thread and an event loop to process requests, only processing one at a time (which is non-blocking). But I am unable to determine the event loop's capacity to run 100k requests per second.
I want to do capacity planning for a Node.js server to handle 100k requests per second.
Please let me know how I can determine the capacity of the event loop and increase it.
A single instance of Node.js runs in a single thread. To take advantage of multi-core systems the user will sometimes want to launch a cluster of Node.js processes to handle the load.
More info here and here
For reference, check the following code for a simple implementation of clustering in Node.js:
var cluster = require('cluster');
var express = require('express');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
    for (var i = 0; i < numCPUs; i++) {
        // Create a worker
        cluster.fork();
    }
} else {
    // Workers share the TCP connection in this server
    var app = express();
    app.get('/', function (req, res) {
        res.send('Hello World!');
    });
    // All workers use this port
    app.listen(8080);
}
Cluster is an extensible multi-core server manager for Node.js; for more, check the source here.
