Multiple socket I/O using Node.js

Here is a summary of my application's requirements. The app needs to process a batch of 10000 items and then upload the processed data to multiple servers using socket I/O. After the upload is done, it moves on to the next set of 1000. In Java this would mean creating multiple threads and starting the uploads simultaneously. Since Node.js is single-threaded, I'm not sure how I can achieve the same effect of making simultaneous connections and uploading in parallel. Can anyone give me some pointers or sample pseudocode for guidance?

Check out threads_a_gogo: https://github.com/xk/node-threads-a-gogo
It lets you create (up to thousands of) JavaScript threads to run JS code in parallel with Node's main thread, using all the available CPU cores, in V8 isolates, from within a single Node process.
I know this is very experimental, but since this is a simple project you're working on, it should fit the bill.
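That said, since the uploads themselves are I/O-bound, plain Node can already run them concurrently without threads; threads are really only needed for CPU-bound work. Here is a minimal sketch of the batch flow, where `uploadToServer()` is a hypothetical stand-in for real socket code (e.g. `net.connect` plus a write):

```javascript
// Hypothetical stand-in for real socket I/O (e.g. net.connect + write).
function uploadToServer(host, batch) {
  return new Promise((resolve) => {
    setTimeout(() => resolve(`${host}: ${batch.length} items`), 10);
  });
}

// Process items in batches; within each batch, upload to all servers
// concurrently, then move on to the next batch.
async function processBatches(items, servers, batchSize) {
  const results = [];
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    // all uploads for this batch run in parallel; await the whole set
    results.push(await Promise.all(servers.map((s) => uploadToServer(s, batch))));
  }
  return results;
}
```

The `Promise.all` per batch gives you the "upload to all servers simultaneously, then move to the next set" behavior from the question without any threads.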

Related

Does NodeJS require a multi cores VPS

I want to develop a website with Nuxt.js or Next.js on a 1-core 2.4 GHz CPU with 1 GB RAM.
Can my website run fast to start with?
Roughly how many requests per second can I expect?
Whether a Node application benefits from multiple cores is application-dependent.
Generally, if the child_process or cluster modules are not involved, there is no need to have multiple cores on your system, because Node.js will only use one core: the request handler always runs on the same event loop, which runs on a single thread.
How to achieve process concurrency and high throughput:
Because JavaScript execution in Node.js is single-threaded, a good rule of thumb for keeping your Node server speedy is to avoid blocking the event loop. You can read about this in the official documentation referenced below.
Simple Illustration:
Consider a case where each request to a web server takes 50ms to complete and 45ms of that 50ms is database I/O that can be done asynchronously.
Choosing non-blocking asynchronous operations frees up that 45ms per request to handle other requests.
This is a significant difference in your application capacity and processing speed just by choosing to use non-blocking methods instead of blocking methods.
Reference:
https://nodejs.org/en/docs/guides/dont-block-the-event-loop/
https://nodejs.org/en/docs/guides/blocking-vs-non-blocking/
I hope this helps.

Loading Streaming Data from RabbitMQ to Postgres in Parallel

I'm still somewhat new to Node.js, so I'm not as conversant in how parallelism works with concurrent I/O operations as I'd like to be.
I'm planning a Node.js application to load streaming data from RabbitMQ to Postgres. These loads will happen during system operation, so it is not a bulk load.
I expect throughput requirements to be fairly low to start (maybe 50-100 records per minute). But I'd like to plan the application so it can scale up to higher volumes as the requirements emerge.
I'm trying to think through how parallelism would work. My first impressions of flow and how parallelism would be introduced is:
Message read from the queue
Query to load data into Postgres kicked off, which pushes callback to the Node stack
Event loop free to read another message from the queue, if available, which will launch another query
Repeat
I believe the queries kicked off in this fashion will run in parallel up to the number of connections in my PG connection pool. Is this a good assumption?
With this simple flow, the limit on parallel queries would seem to be the size of the Postgres connection pool. I could make that as big as required for throughput (and that the server and backend database can handle) and that would be the limiting factor on how many messages I could process in parallel. Does that sound right?
I haven't located a great reference on how many parallel I/Os Node will instantiate. Will Node eventually block as my event loop generates too many I/O requests that aren't yet resolved (if not, I assume pg will put my query on the callback stack when I have to wait for a connection)? Are there dials I can turn to affect these limits by setting switches when I launch Node? Am I assuming correctly that libuv and the "pg" lib will in fact run these queries in parallel within one Node.js process? If those assumptions are correct, I'd think I'd hit connection pool size limits before I'd run into libuv parallelism limits (or possibly at the same time if I size my connection pool to the number of cores on the server).
Also, related to the discussion above about Node launching parallel I/O requests, how do I prevent Node from pulling messages off the queue as quick as they come in and queuing up I/O requests? I'd think at some point this could cause problems with memory consumption. This relates back to my question about startup parameters to limit the amount of parallel I/O requests created. I don't understand this too well at this point, so maybe it's not a concern (maybe by default Node won't create more parallel I/O requests than cores, providing a natural limit?).
The other thing I'm wondering is when/how running multiple copies of this program in parallel would help? Does it even matter on one host since the Postgres connection pool seems to be the driver of parallelism here? If that's the case, I'd probably only run one copy per host and only run additional copies on other hosts to spread the load.
As you can see, I'm trying to get some basic assumptions right before I start down this road. Insight and pointers to good reference doc would be appreciated.
I resolved this with a test of the prototype I wrote. A few observations:
1. If I don't set prefetch on the RabbitMQ channel, Node will pull ALL the messages off the queue in seconds. I did a test with 100K messages on the queue and Node pulled all 100K off in seconds, though it took many minutes to actually process the messages.
2. The behavior mentioned in #1 above is not desirable, because then Node must cache all the messages in memory. In my test, Node took up 2 GB when pulling down all those messages quickly, whereas if I set prefetch to match the number of database connections, Node took up only 80 MB and drained the queue slowly, as it finished processing the messages and sent back ACKs.
3. A single instance of Node running this program kept my CPUs 100% utilized.
So, the morals of the story seem to be:
Node can spawn any number of async I/O handlers (limited by available memory)
In a case like this, you want to limit how many async I/O requests Node spawns to avoid excessive memory usage.
Creating additional child processes for this workload made no difference. The unit of parallelism was the size of the database connection pool. If my workload did more in JavaScript instead of just delegating to Postgres, additional child processes would help. But in this case, it's all I/O (and thankfully I/O that doesn't need the Node threadpool), so the additional child processes do nothing.
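The second moral can be sketched as a small concurrency limiter: instead of firing every query at once, cap the number of in-flight operations at the pool size. (`runLimited()` is an illustrative helper, not part of any library.)

```javascript
// Run an array of task functions with at most `limit` in flight,
// mirroring a connection pool of size `limit`. Results keep order.
async function runLimited(tasks, limit) {
  const results = [];
  let next = 0;
  async function worker() {
    while (next < tasks.length) {
      const i = next++;            // claim the next task index
      results[i] = await tasks[i]();
    }
  }
  // `limit` workers churn through the tasks cooperatively
  await Promise.all(Array.from({ length: limit }, worker));
  return results;
}
```

Setting RabbitMQ prefetch to the pool size achieves the same effect at the message-broker level; this is the in-process equivalent.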

API getting Slow due to Iteration using Node JS

I am using Node.js to create a REST API.
In my scenario I have two APIs:
API 1: has to get 10,000 records and iterate over them to modify some of the data.
API 2: a simple GET method.
When I open Postman and hit the first API and the second API in parallel, the second API gets its response more slowly because Node.js is single-threaded.
My expectation:
Even though the 1st API takes time, it should not hold up the 2nd API for long.
From the Node.js docs I found the clustering concept:
https://nodejs.org/dist/latest-v6.x/docs/api/cluster.html
So I implemented clustering, and it created 4 server instances.
When I hit API 1 in one tab and API 2 in a second tab, it worked fine.
But when I opened API 1 in 4 tabs and API 2 in a 5th tab, the slowness returned.
What would be the best solution to this issue?
Because of the single-threaded nature of node.js, the only way to make sure your server is always responsive to quick requests such as you describe for API 2 is to make sure that you never have any long-running operations in your server.
When you do encounter some operation in your code that takes a while to run and would affect the responsiveness of your server, your options are as follows:
Move the long-running work to a new process. Start up a new process and run the lengthy operation there. This allows your server process to stay active and responsive to other requests, even while the long-running process is still crunching on its data.
Start up enough clusters. Using the clustering you've investigated, start up more clustered processes than you expect to have simultaneous calls to your long-running operation. This allows there to always be at least one clustered process available to respond. Sometimes you cannot predict how many this will be, or it will be more than you can practically create.
Redesign your long-running process to execute its work in chunks, returning control to the system between chunks so that node.js can interleave other work with the long-running work. Here's an example of processing a large array in chunks. That post was written for the browser, but the concept of not blocking the event loop for too long is the same in node.js.
Speed up the long running task. Find a way to speed up the long running job so it doesn't take so long (using caching, not returning so many results at once, faster way to do it, etc...).
Create N worker processes (probably one less worker process than the number of CPUs you have) and create a work queue for the long running tasks. Then, when a long running request comes in, you insert it in the work queue. Then, each worker process is free to work on items in the queue. When more than N long tasks are being requested, the first ones will get worked on immediately, the later ones will wait in the queue until there is a worker process available to work on them. But, most importantly, your main node.js process will stay free and responsive for regular requests.
This last option is the most foolproof because it will be effective to any number of long running requests, though all of the schemes can help you.
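The chunking option can be sketched like this; `processChunked()` is illustrative, with `setImmediate` yielding to the event loop between slices so short requests can interleave:

```javascript
// Process a large array in fixed-size slices, yielding to the event
// loop between slices so other requests are not starved.
function processChunked(items, chunkSize, fn) {
  return new Promise((resolve) => {
    const out = [];
    let i = 0;
    function step() {
      const end = Math.min(i + chunkSize, items.length);
      for (; i < end; i++) out.push(fn(items[i]));
      if (i < items.length) setImmediate(step); // let other work run
      else resolve(out);
    }
    step();
  });
}
```

For the 10,000-record iteration in API 1, a chunk size of a few hundred would keep each event-loop turn short while adding negligible total overhead.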
Node.js actually is not multi-threaded, so all of these requests are handled in the event loop of a single thread.
Each Node.js process runs in a single thread, and by default its V8 heap is limited to roughly 512 MB on 32-bit systems and 1 GB on 64-bit systems (adjustable with --max-old-space-size).
However, you can split a single process into multiple processes or workers. This can be achieved through a cluster module. The cluster module allows you to create child processes (workers), which share (or not) all the server ports with the main Node process.
You can invoke the cluster API directly in your app, or you can use one of the many abstractions over it.
https://nodejs.org/api/cluster.html

Node.js Clusters with Additional Processes

We use clustering with our express apps on multi cpu boxes. Works well, we get the maximum use out of AWS linux servers.
We inherited an app we are fixing up. It's unusual in that it has two processes. It has an Express API portion to take incoming requests. But the process that acts on those requests can run for several minutes, so it was built as a separate background process: Node calling Python and Maya.
Originally the two were tightly coupled, with the Python script called by the request that uploads the data. But this of course was suboptimal, as it would leave the client waiting for a response for the time it took to run, so it was rewritten as a background process that runs in a loop, checking for new uploads and processing them sequentially.
So my question is this: if we have this separate Node process running in the background, and we run clusters, which start up a process for each CPU, how is that going to work? Are we not going to get two Node processes competing for the same CPU? We were getting a bit of weird behaviour and crashing yesterday, without a lot of error messages (god I love Node), so it's a bit concerning. I'm assuming Linux will just swap the processes in and out as they are needed. But I wonder if it will be problematic, and I also wonder about someone's web session getting swapped out for several minutes while the longer-running process runs.
The smart thing to do would be to rewrite this to run on two different servers, but the files that Maya uses/creates are on the server's file system, and we were not given the budget to rebuild it the way we should. So we're stuck with this architecture for now.
Any thoughts on possible problems and how to avoid them would be appreciated.
From an overall architecture perspective, spawning one Node.js process per core is a great way to go. You have a lot of interdependencies, though: the Node.js processes are calling Maya, which may use multiple threads (keep that in mind).
The part that is concerning to me is your random crashes and your "process that runs in a loop". If that process is just checking the file system, you probably have a race condition where the Node.js processes are competing to work on the same input/output files.
In theory, one Node.js process per core will work great and should help utilize all your CPUs. Linux always swaps processes in and out, so that is not an issue. You could even start multiple Node.js processes per core and still not have a problem.
One last note: keep an eye on your memory usage. Several Linux distributions on EC2 do not have a swap file enabled by default, and running out of memory can be another silent app killer, so it's best to add a swap file in case you run into memory issues.

Controlling the flow of requests without dropping them - NodeJS

I have a simple nodejs webserver running, it:
Accepts requests
Spawns separate thread to perform background processing
Background thread returns results
App responds to client
Using Apache benchmark "ab -r -n 100 -c 10", performing 100 requests with 10 at a time.
Average response time of 5.6 seconds.
My logic for using Node.js is that it is typically quite resource-efficient, especially when the bulk of the work is being done by another process. It seems like the most lightweight webserver option for this scenario.
The Problem
With 10 concurrent requests my CPU was maxed out, which is no surprise since there is CPU-intensive work going on in the background.
Scaling horizontally is an easy thing to do, although I want to make the most out of each server for obvious reasons.
So, with Node.js, either raw or with some framework, how can one keep the CPU load under control so as not to overload it?
Potential Approach?
Could I accept the request, store it in a DB or some other persistent storage, and have a separate process that uses an async library to process x requests at a time?
In your potential approach, you're basically describing a queue. You can store incoming messages (jobs) there and have each process take one job at a time, only getting the next one when processing of the previous job has finished. You could spawn a number of processes working in parallel, such as one per core in your system. Spawning more won't help performance, because multiple processes sharing a core will just run slower. Keeping one core free may be preferable, to keep the system responsive for administrative tasks.
Many different queues exist. A node-based one using redis for persistence that seems to be well supported is Kue (I have no personal experience using it). I found a tutorial for building an implementation with Kue here. Depending on the software your environment is running in though, another choice might make more sense.
Good luck and have fun!
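To make the queue shape concrete, here is a minimal in-memory version (a real deployment would use Kue/Redis or similar for persistence; `createQueue()` is illustrative, not a library API):

```javascript
// Jobs are functions returning promises. At most `workerCount` jobs
// run at once; the rest wait in the array until a slot frees up.
function createQueue(workerCount) {
  const jobs = [];
  const results = [];
  let active = 0;
  let onDrain = null;

  function next() {
    if (jobs.length === 0) {
      // queue drained: notify the waiter once the last job finishes
      if (active === 0 && onDrain) onDrain(results);
      return;
    }
    active++;
    const job = jobs.shift();
    Promise.resolve()
      .then(job)
      .then((r) => results.push(r))
      .finally(() => { active--; next(); });
  }

  return {
    push(job) {
      jobs.push(job);
      if (active < workerCount) next(); // start it if a slot is free
    },
    drained() {
      return new Promise((resolve) => {
        if (active === 0 && jobs.length === 0) resolve(results);
        else onDrain = resolve;
      });
    },
  };
}
```

Setting `workerCount` to cores minus one matches the advice above about leaving a core free.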
