Do standalone Node.js scripts need clustering? - node.js

I have standalone Node.js workers for data processing.
Depending on my server's capacity, I will be running multiple instances of the worker.
So my question is: do I need to include clustering in my worker?
The main purpose of clustering is to make use of multiple CPU cores, but that only holds true if we are serving HTTP requests.
In my case I would already be running 4 instances, and each instance would then cluster on its own, so I think clustering is not recommended for standalone scripts.

The built-in cluster module would be useful in this case, but certainly not necessary. If you have some queue that each instance can pull from without conflict, then you're probably already running as well as you can.
If you want to manage/broker the work for the child workers, then you can use the cluster module to start sub processes and hand out the tasks from your master process.
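If you do go the broker route, a minimal sketch of that pattern could look like this (the task list and the 'idle' message protocol are placeholders for whatever queue you actually use):

```js
// Minimal sketch of a master/broker pattern with the built-in cluster module.
// (cluster.isPrimary requires Node 16+; use cluster.isMaster on older versions.)
const cluster = require('cluster');
const os = require('os');

if (cluster.isPrimary) {
  const tasks = ['job-1', 'job-2', 'job-3', 'job-4']; // hypothetical work items
  const nextTask = () => tasks.shift();

  for (let i = 0; i < os.cpus().length; i++) {
    const worker = cluster.fork();
    // Hand out a task whenever a worker reports that it is idle.
    worker.on('message', (msg) => {
      if (msg === 'idle') {
        const task = nextTask();
        if (task) worker.send(task);
        else worker.kill(); // no work left
      }
    });
  }
} else {
  process.on('message', (task) => {
    console.log(`worker ${process.pid} processing ${task}`);
    process.send('idle'); // ask for the next task
  });
  process.send('idle'); // ask for the first task
}
```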

Related

Which approach is used by PM2 to run the application in cluster mode?

PM2 uses the Node.js cluster module to run the application in cluster mode.
The cluster module supports two methods of distributing incoming connections.
The first one (and the default one on all platforms except Windows) is the round-robin approach, where the primary process listens on a port, accepts new connections, and distributes them across the workers in a round-robin fashion, with some built-in smarts to avoid overloading a worker process.
The second approach is where the primary process creates the listen socket and sends it to interested workers. The workers then accept incoming connections directly.
So my question is: which approach does PM2 use to run the application in cluster mode?
If you know the answer, let me know.
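For reference, the choice between those two methods is exposed by the cluster module itself through cluster.schedulingPolicy; a minimal sketch (not PM2-specific, worker count picked arbitrarily):

```js
const cluster = require('cluster');
const http = require('http');

// SCHED_RR: the primary accepts connections and round-robins them to the
// workers (the default everywhere except Windows). SCHED_NONE: the primary
// passes the listen socket to the workers, which accept connections themselves.
cluster.schedulingPolicy = cluster.SCHED_RR; // or cluster.SCHED_NONE

if (cluster.isPrimary) {
  for (let i = 0; i < 4; i++) cluster.fork();
} else {
  http.createServer((req, res) => res.end(`handled by ${process.pid}\n`))
      .listen(8000);
}
```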

Node.js Cluster module - Multiprocessing or Parallel Processing?

I built a Node.js application that is built up of smaller components (children worker processes) that all perform different tasks, “in parallel”. When the main program runs, it spawns the workers and they begin their work. I used Node.js’s cluster module to accomplish this.
Is this an example of multiprocessing or parallel processing, and why?
Clustering is better described as load balancing than as parallel processing.
Node.js is single-threaded: even if you have 4 cores, a single process will use only one of them, regardless of what is available; that's fork mode. Clustering runs one Node.js process on each available core, which is an optimized way of doing things and effectively a form of load balancing.
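To make that concrete, a minimal sketch of forking one worker per available core (CPU-bound busy work stands in for a real task):

```js
const cluster = require('cluster');
const os = require('os');

if (cluster.isPrimary) {
  // One worker per core; the OS spreads the processes across the cores.
  for (let i = 0; i < os.cpus().length; i++) cluster.fork();
} else {
  // Each worker is still single-threaded; the parallelism comes from having
  // several processes, not from threads inside one process.
  let sum = 0;
  for (let i = 0; i < 1e9; i++) sum += i; // CPU-bound busy work
  console.log(`worker ${process.pid} finished`);
  process.exit(0);
}
```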

Should You Use PM2, Node Cluster, or Neither in Kubernetes?

I am deploying some NodeJS code into Kubernetes. It used to be that you needed to run either PM2 or the NodeJS cluster module in order to take full advantage of multi-core hardware.
Now that we have Kubernetes, it is unclear if one must use one or the other, to get the full benefit of multiple cores.
Should a person specify the number of CPU units in their pod YAML configuration?
Or is there simply no need to account for multiple cores with NodeJS in Kubernetes?
You'll achieve utilization of multiple cores either way. The difference is that with the Node.js cluster module approach you have to request more resources from Kubernetes (i.e., multiple cores for a single pod), which may be harder for Kubernetes to schedule than several containers requesting one core (or less) each, since those can in turn be scheduled across multiple nodes rather than requiring a single node with enough free cores.
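A rough sketch of that trade-off in pod terms, with placeholder names and an arbitrary image, assuming you scale out with replicas instead of the cluster module:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-worker                 # hypothetical name
spec:
  replicas: 4                       # scale out with pods instead of cluster.fork()
  selector:
    matchLabels: { app: node-worker }
  template:
    metadata:
      labels: { app: node-worker }
    spec:
      containers:
        - name: node-worker
          image: example/node-worker:latest   # hypothetical image
          resources:
            requests:
              cpu: "500m"           # less than one core per replica
            limits:
              cpu: "1"
```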

clustering in node.js using mesos

I'm working on a project with Node.js that involves a server. Due to a large number of jobs, I need to perform clustering to divide the jobs between different servers (different physical machines). Note that my jobs have nothing to do with the internet, so I cannot use stateless connections (or Redis to keep state) and a load balancer in front of the servers to distribute the connections.
I already read about the "cluster" module but, from what I understood, it seems to scale only across multiple processors on the same machine.
My question: is there any suitable distributed module available in Node.js for my work? What about Apache Mesos? I have heard that Mesos can abstract multiple physical machines into a single server; is that correct? If yes, is it possible to use the node.js cluster module on top of Mesos, since we would then have only one virtual server?
Thanks
My question: is there any suitable distributed module available in Node.js for my work?
Don't know.
I have heard that Mesos can abstract multiple physical machines into a single server; is that correct?
Yes, almost. It allows you to pool resources (CPU, RAM, disk) across multiple machines, gives you the ability to allocate resources to your applications, and lets you run and manage those applications. So you can ask Mesos to run X instances of node.js and specify how many resources each instance needs.
http://mesos.apache.org
https://www.cs.berkeley.edu/~alig/papers/mesos.pdf
If yes, is it possible to use the node.js cluster module on top of Mesos, since we would then have only one virtual server?
Admittedly, I don't know anything about node.js or clustering in node.js. Going by http://nodejs.org/api/cluster.html, it just forks off a bunch of child workers and then round-robins the connections between them. You have two options off the top of my head:
Run node.js on Mesos using an existing framework such as Marathon. This will be the fastest way to get something going on Mesos (see the app-definition sketch after this answer). https://github.com/mesosphere/marathon
Create a Mesos framework for node.js, which essentially does what the node.js cluster module does, but across machines. http://mesos.apache.org/documentation/latest/app-framework-development-guide/
In both of these solutions, you have the option of letting Mesos create as many instances of node.js as you need, or using Mesos to run the node.js cluster module on each machine and letting it manage all the workers on that machine.
I didn't google, but there might already be a node.js mesos framework out there!
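For the Marathon route, a minimal app definition that asks Mesos to run several node.js instances might look roughly like this (the id, command, and resource numbers are made up):

```json
{
  "id": "/node-worker",
  "cmd": "node worker.js",
  "cpus": 1,
  "mem": 512,
  "instances": 4
}
```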

How to make sure that nodejs cluster assigns processes to different cores?

When utilizing multiple cores via Node.js's cluster module, is it guaranteed that each forked worker is assigned to a different core?
If it's not guaranteed, is there any way to control or manage it and ultimately guarantee that they all end up on different cores? Or does the OS scheduler distribute them evenly?
A while ago I did some tests with the cluster module, which you can check in this post that I wrote. Looking at the system monitor screenshots, it is pretty straightforward to understand what happens under the hood (with and without the cluster module).
It is indeed up to the OS to distribute processes over the cores. You could obtain the pid of a child process and use an external utility to set the CPU affinity of that process. That would, of course, not be very portable.
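For example, on Linux you could shell out to taskset after forking (a non-portable sketch; the core numbering is just illustrative):

```js
const cluster = require('cluster');
const os = require('os');
const { execSync } = require('child_process');

if (cluster.isPrimary) {
  os.cpus().forEach((_, core) => {
    const worker = cluster.fork();
    // taskset -cp <cpu-list> <pid>: pin the worker to a single core.
    execSync(`taskset -cp ${core} ${worker.process.pid}`);
  });
} else {
  console.log(`worker ${process.pid} started`);
}
```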
