Running multiple instances of the same script in Node.js (child processes)

I have a script that queries an API. The input to the script is a unique user ID, which determines which endpoint on the API is queried.
I want to spawn multiple instances of this script for a given number of users (X) and am wondering what the best solution for this is.
I have looked into Node.js child processes. Is spawning multiple child processes from a main process the main way of solving this problem within Node? Do all of these processes then run on one thread, since Node.js is single-threaded?
The script that I want to spawn multiple processes of will run constantly once it is started, querying an API for data every second. I guess the answer will have to factor in how much compute power I have available, but generally, how scalable is the child-process approach?
Also, in the background, would a Node.js child process be doing the same thing as running node index.js & from a shell to spawn separate processes? (Aside from being able to control the child processes from the main process in Node.js.)
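For illustration, a minimal sketch of the fork-per-user approach described above, using the built-in child_process.fork; the worker file name (query-user.js), the user ID list, and the API endpoint are assumptions, not part of the original question.

// main.js - fork one worker per user ID
const { fork } = require('child_process');

const userIds = ['user-1', 'user-2', 'user-3']; // placeholder IDs

for (const id of userIds) {
  // each fork is a separate OS process with its own event loop
  const child = fork('query-user.js', [id]);
  child.on('exit', (code) => console.log(`worker for ${id} exited with code ${code}`));
}

// query-user.js - polls the API every second for its assigned user
const userId = process.argv[2];
setInterval(() => {
  // e.g. fetch(`https://api.example.com/users/${userId}`) - endpoint is hypothetical
  console.log(`querying endpoint for ${userId}`);
}, 1000);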

What you require can be done using a process manager like pm2.
You can read up more about pm2 here.
To launch multiple instances as a cluster using pm2, use the following syntax:
pm2 start server.js -i 4
The above command launches 4 instances of server.js. Also note that you can do a lot of the configuration through a config file (see the pm2 docs), and pm2 even supports running instances in cluster mode to spread load across cores.
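For reference, a minimal sketch of the same setup as a pm2 config file (the file and app names are illustrative); pm2 picks it up with pm2 start ecosystem.config.js.

// ecosystem.config.js
module.exports = {
  apps: [
    {
      name: 'server',        // illustrative app name
      script: 'server.js',
      instances: 4,          // equivalent to -i 4
      exec_mode: 'cluster',  // run the instances in cluster mode
      autorestart: true,     // restart an instance if it crashes
    },
  ],
};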

You can try this CLI tool to achieve what you need. The library will ask for a script to spawn and will cycle its execution across multiple child processes.

Related

Two node.js processes are running that I don't expect. Can I kill them?

I don't expect there to be any node.js processes on my Windows computer, but I can see two node.js processes in Task Manager.
I'm not intentionally running any node.js right now, yet these two node.js processes are there.
I had previously run a node.js process with the pm2 module, so maybe that is the cause.
Is it OK to kill the two processes manually from Task Manager? Or does either process serve some purpose other than executing the program I wrote, so that I should keep one or both of them?

Detect "master" process in iisnode

I'm running a node.js app on a 32-core machine with iisnode. IIS creates 32 processes to use all the CPU cores effectively and uses named pipes for each process.
I need to start a small task scheduler and run some other code, but only in one of the processes; I don't want the 32 processes running the same code at the same time. As this is not a Node.js cluster I can't use cluster.isMaster, and I'm not aware that iisnode gives an ID to each process as pm2 does (read here).
Is there an easy way to run some code but only in one of all the created processes? I know I could use database lock but I was hoping to find a simpler way before having to do that.
I ended up using the proper-lockfile package, which worked perfectly in my case!
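For illustration, a minimal sketch of that lockfile approach; the lock file name and the startTaskScheduler function are hypothetical. Whichever process acquires the lock first runs the scheduler, and the rest skip it.

// run in every iisnode process; only one wins the lock
const lockfile = require('proper-lockfile');

lockfile.lock('scheduler.lock', { realpath: false, retries: 0, stale: 30000 })
  .then((release) => {
    // this process holds the lock: start the scheduler here
    startTaskScheduler(); // hypothetical function
    // keep the lock for the life of the process; call release() on shutdown
  })
  .catch(() => {
    // another process already holds the lock: do nothing
  });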

node.js with child processes in docker

I have a node.js web application that runs on my Amazon AWS server using nginx and pm2. The application processes files for the user, which is done using a job system and child processes. In short, when the application starts via pm2, I create a child process for each CPU core of the server. Each child process (worker) then completes jobs from the job queue.
My question is: could I replicate this in Docker, or would I need to modify it somehow? One assumption I had was that I would need to create a container for the database, one container for the application, and then multiple worker containers to do the processing, so that if one crashes I just spin up another worker.
I have been doing research online, including a Udemy course, to get my head around this stuff, but I haven't come across an example or anything I can relate to my problem/question.
Any help, reading material or suggestions would be greatly appreciated.
Containers run at the same performance level as the host OS. There is no process performance hit. I created a whitepaper with Docker and HPE on this.
You wouldn't use pm2 or nodemon, which are meant to start multiple processes of your node app and restart them if they fail. That's the job of Docker now.
If you're in Swarm, you'd just increase the replica count of your service to match the number of CPUs/threads you want running at the same time in the swarm.
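For example, scaling an existing Swarm service is a one-line command (the service name node-app is an assumption):
docker service scale node-app=4
Swarm then keeps four replicas of the container running and reschedules them if they fail.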
I don't mention the nodemon/pm2 thing for Swarm in my node-docker-good-defaults repo, so I'll add that as an issue to update it for.

What are the effective differences between child_process.fork and cluster.fork?

I understand that cluster.fork allows multiple processes to listen on the same port(s); what I also want to know is how much additional overhead there is in supporting this when some of your workers are not listeners/handlers for the TCP service.
I have a service for which I also want to launch a couple of workers, e.g. 2 web-service listener processes and 3 worker instances. Is it best to use cluster for them all, or would cluster for the 2 web services and child_process for the workers be better?
I don't know the internals in node, but think it would be nice for myself and others to have a better understanding of which route to take given different needs. For now, I'm using cluster for all the processes.
cluster.fork is implemented on top of child_process.fork. The extra thing cluster.fork brings is that it lets workers listen on a shared port. If you don't need that, just use child_process.fork. So yes: use cluster for the web servers and child_process for the workers.
Cluster is a Node.js module containing functions and properties that help developers fork processes so they can take advantage of multi-core systems.
With the cluster module, creating child processes and sharing server ports between them becomes easy. A single Node.js instance runs in a single thread, so to take advantage of multiple cores a cluster of Node.js processes is launched to distribute the load.
A developer can access operating-system functionality through the child_process module by running a system command inside a child process. The child process's input streams can be controlled, and the developer can also listen to its output streams.
A child process can easily be spun up using Node's child_process module, and these child processes can communicate with each other with the help of a messaging system.
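A minimal sketch contrasting the two, as two separate snippets (the port and the worker file name are illustrative):

// cluster: forked workers share the same listening port
const cluster = require('cluster');
const http = require('http');

if (cluster.isPrimary) {   // cluster.isMaster on older Node versions
  cluster.fork();          // web worker 1
  cluster.fork();          // web worker 2
} else {
  http.createServer((req, res) => res.end('ok')).listen(3000);
}

// child_process: plain workers with no shared port, talking over IPC
const { fork } = require('child_process');
const worker = fork('worker.js'); // illustrative worker script
worker.send({ job: 'process-this' });
worker.on('message', (result) => console.log('worker result:', result));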

Best method for Node.JS forking?

I'm writing a trajectory predictor in Node.JS. You may think it's a funny language to write one in, but it's actually working great. Now, I want to be able to start the predictor from a web interface in Node.JS.
The actual predictor process takes about 5 minutes to run, so when spawning it from the Node web process, I don't want the web process waiting for the child process to finish. What is the best method of forking, in Node.JS, to allow for spawning and releasing a process like this?
Use the built-in child_process node module: http://nodejs.org/api/child_process.html
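For illustration, a minimal sketch of spawning the predictor and releasing it so the web process does not wait on it (the script name predictor.js is an assumption):

const { spawn } = require('child_process');

// run the predictor detached, ignoring its stdio so no pipes keep the parent tied to it
const child = spawn('node', ['predictor.js'], {
  detached: true,
  stdio: 'ignore',
});

// unref() removes the child from the parent's event loop reference count,
// so the web process never waits for it to finish
child.unref();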
