So I'm starting to use node.js for a project I'm doing.
When a client makes a request, my Node.js server fetches JSON from another server and then reformats it into a new JSON document that gets served to the client. However, the JSON the Node server gets from the other server can potentially be pretty big, so that "massaging" of the data is quite CPU-intensive.
I've been reading for the past few hours about how Node.js isn't great for CPU-bound tasks, and the main advice I've seen is to spawn a child process (basically a .js file running in a separate instance of Node) that deals with any CPU-intensive tasks that might block the main event loop.
So let's say I have 20,000 concurrent users: that would mean spawning 20,000 OS-level processes as it runs these child processes.
Does this sound like a good idea? (A different web server would just create 20,000 threads within the same process.)
I'm not sure if I should be running a child process, but I do need a way to run a CPU-intensive task without blocking. Any ideas of what I should do?
The people who say that don't know how to architect solutions.
NodeJS is exactly what it says: it is a node, and should be treated as such.
In your example, your node instance connects to an external api and grabs json to process and send back.
i.e.
1. GET http://server.com/getJSON
2. Process the JSON
3. POST http://server.com/postJSON
So what do you do?
Ask yourself: is time an issue? If so, then Node isn't the solution.
However, if you are more interested in raw throughput, Node fits: instead of 1 request done in 4 seconds,
you are interested in 200 requests finishing in 10 seconds, even with each individual one taking close to the full 10 seconds.
Diagnose how long your JSON takes to massage. If it is less than 1 second,
just run 4 Node instances instead of 1.
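For instance, a minimal sketch using Node's built-in cluster module (a process manager like pm2 works just as well; ./server is a placeholder for a file that starts your HTTP server):

    const cluster = require('cluster');

    if (cluster.isMaster) {
      for (let i = 0; i < 4; i++) cluster.fork(); // 4 workers share one port
    } else {
      require('./server'); // placeholder: starts the HTTP server
    }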
However, if it's more complex than that, break the JSON into segments and use asynchronous callbacks to process each segment:
    setImmediate(function () { doProcess(segment1); });
    // ...and at the end of each doProcess: setImmediate(function () { doProcess(nextSegment); });

Each doProcess schedules the next doProcess. (setImmediate is used here rather than process.nextTick, because recursive nextTick callbacks run before pending I/O and would starve other requests.)
Node js will trade time between requests.
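As a more complete sketch of that segmenting idea (assuming the massaging can be applied independently to slices of a large array; processInSegments and massage are illustrative names, not an existing API):

    // Process `items` in segments, yielding to the event loop in between
    // so other requests get CPU time.
    function processInSegments(items, massage, done, segmentSize = 1000) {
      const out = [];
      let i = 0;
      (function next() {
        const end = Math.min(i + segmentSize, items.length);
        for (; i < end; i++) out.push(massage(items[i])); // one segment of work
        if (i < items.length) setImmediate(next); // let pending I/O run
        else done(out);
      })();
    }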
Now take that solution and scale it to 4 Node instances per server, and 2-5 servers,
and suddenly you have an extremely scalable and cost-effective solution.
I have a very simple express server with 1 endpoint listening on a certain port.
I'd like a separate scheduled script to make an API call every 10 minutes and save the data locally (in memory).
When the endpoint on my server is hit, I'd like to serve the locally cached data from memory.
How can I make sure that the scheduled cron job never blocks the processing of incoming requests, i.e. that both continue to happen at the same time?
I've read about child processes and worker threads, but I'm not sure if this would be a good use case for it.
JavaScript is single-threaded, so worker threads and forking notwithstanding, every line of JavaScript code that executes blocks other lines of code from executing. Only one can execute at a time. So forgetting about the scheduled job for a moment, in an express server, multiple requests happening concurrently will compete with each other for CPU resources and block each other.
However, Node.js has asynchronous I/O. A typical request often looks a bit like this:
Receive request and run a little bit of JavaScript code (maybe 1ms).
Query an external REST API (let's say 50ms).
Run a little more JavaScript code and send a response (maybe 1ms).
In steps 1 and 3, all other requests are blocked from executing JavaScript code. They have to wait until the current request is done executing its code. In step 2 however, other requests are free to execute JavaScript code, one after the other, since the network request is non-blocking.
I explain all this because it sounds like your scheduled job might go through the same three steps mentioned above. If that's the case, then functionally it's not going to block anything any more than simultaneous requests to the server are already blocking each other, and there's little reason to worry about threading or multi-processing unless your server is under such heavy load that it cannot serve all incoming requests.
So this is not a good use case for child processes and worker threads unless the scheduled job has different characteristics than I'm assuming (for example if it spends multiple seconds crunching heavy computations in JavaScript, that would noticeably impact request response time when it runs, and might make sense to break out into a separate process or worker thread).
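To make that concrete, here is a minimal sketch of the setup from the question, assuming Node 18+ (for global fetch) and Express; the URL, port, and route are placeholders:

    const express = require('express');
    const app = express();

    let cache = null; // refreshed by the scheduled job, served from memory

    async function refresh() {
      try {
        const res = await fetch('https://api.example.com/data'); // placeholder URL
        cache = await res.json(); // only this small step runs JavaScript
      } catch (err) {
        console.error('refresh failed, keeping stale cache', err);
      }
    }

    refresh(); // prime the cache on startup
    setInterval(refresh, 10 * 60 * 1000); // every 10 minutes

    app.get('/data', (req, res) => {
      // ~1ms of JavaScript, so the scheduled job never noticeably blocks this
      res.json(cache ?? { status: 'warming up' });
    });

    app.listen(3000);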
I want to ask some clarifying questions about NodeJS, which I think are poorly described on the resources I studied.
Many sources say that NodeJS is not suitable for complex calculations, because it is single-threaded and requests are executed sequentially.
I created the simplest possible server in Node and wrote an endpoint that takes about 10 seconds to execute (a busy loop). Then I made 10 consecutive requests via Postman, and indeed, each subsequent request began executing only after the previous one had returned its response.
Do I understand correctly that in this case, if the execution time of one endpoint is approximately 300ms and 700 users access the server at the same time, then the waiting time for the last user will be a critical 700 × 300 = 210,000 ms?
I have also heard that an advantage of NodeJS is its ability to support a large number of simultaneous connections. What does this mean, and why is it a plus if the response for the last person in the previous question will still take very long?
Another statement I came across is that libuv allows you to do many I/O operations at the same time. How does that work if NodeJS processes requests sequentially anyway?
Thank you very much!
TL;DR: I/O operations don't block the single execution thread. CPU intensive tasks DO block the thread and a NodeJS web server is not a good option in that case.
Yes, if your endpoint needs 300ms of synchronous (CPU) work to complete the operation, the last user will wait 210,000ms.
NodeJS is good at handling a large number of connections when the work it needs to do is i/o bound. It is not a good choice if the endpoint needs a lot of CPU time.
I/O operations operate at a different layer and take essentially zero CPU time. That means that once an I/O operation is fired, NodeJS can accept new calls to the endpoint. NodeJS then polls the operating system for completed I/O calls whenever it's not using the CPU, and executes their callbacks. This is what allows it to handle a large number of concurrent requests without one user needing to wait for others to finish.
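A small sketch of the difference (assuming Express; the port and durations are arbitrary):

    const express = require('express');
    const app = express();

    // CPU-bound: a synchronous busy loop holds the single JS thread,
    // so 700 concurrent users queue up (the 210,000ms scenario above).
    app.get('/cpu', (req, res) => {
      const end = Date.now() + 300;
      while (Date.now() < end) {} // blocks every other request
      res.send('done');
    });

    // I/O-bound: the wait happens off the JS thread (like a DB or HTTP
    // call would), so 700 concurrent users all finish in roughly 300ms.
    app.get('/io', (req, res) => {
      setTimeout(() => res.send('done'), 300);
    });

    app.listen(3000);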
I've changed a long running process on an Node/Express route to use Bull/Redis.
I've pretty much copied this tutorial on Heroku docs.
The gist of that tutorial is: the Express route schedules the job, immediately returns 200 to the client, and the browser long-polls for the job status (a ping route on Express). When the client gets a completed status, it displays it in the UI. The worker is a separate file and is run with an additional yarn run worker.js.
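For reference, that pattern looks roughly like this (a sketch assuming Bull with a local Redis; the queue name, routes, and doLongRunningWork are placeholders):

    const express = require('express');
    const Queue = require('bull');

    const app = express();
    app.use(express.json());
    const workQueue = new Queue('work', 'redis://127.0.0.1:6379');

    // Web process: schedule the job and respond immediately.
    app.post('/job', async (req, res) => {
      const job = await workQueue.add({ payload: req.body });
      res.status(200).json({ id: job.id }); // the client then polls /job/:id
    });

    app.get('/job/:id', async (req, res) => {
      const job = await workQueue.getJob(req.params.id);
      res.json({ state: await job.getState(), result: job.returnvalue });
    });

    app.listen(3000);

    // worker.js (run as a separate process) would instead contain:
    // workQueue.process(async (job) => doLongRunningWork(job.data));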
Notice the end where it recommends using Throng for clustering your workers.
I'm using this Bull Dashboard to monitor jobs/queues. This dashboard shows the available Workers, and their status (Idle when not running/ Not in Idle when running).
I've got the MVP working, but the operation is super slow. The average time to complete is 1 minute 30 seconds, whereas before adding Bull it took seconds to complete.
Another strange thing: it seems to take at least 30 seconds for a worker's status to change from Idle to not Idle. It seems a lot of the latency is spent waiting for the worker.
Given that the new operation lives in a separate file (worker.js) and Throng enables clustering, I was expecting this operation to be super fast, but it is just the opposite.
Does anyone have an experience with this? Or pointers to help figure out what is causing this to be so slow?
Does anyone know how many requests per second a basic single instance of a Node HTTP server can handle without queueing any requests?
Actually, I need to write a Node.js app that should be able to respond to about a thousand incoming requests within 100ms, consistently. I'm testing it on a 4-CPU server, running 4 instances in cluster mode, but currently it is only able to handle fewer than 1,000 requests per second within 100ms consistently.
See this answer:
How many clients can an http-server handle?
It's not about queuing but about how many concurrent requests Node can handle at the same time. It contains example benchmarks that you can use and tweak to your needs.
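For example, one way to run such a benchmark yourself (assuming the autocannon load-testing tool; the port, connection count, and duration are arbitrary):

    npx autocannon -c 1000 -d 10 http://localhost:3000

This opens 1,000 concurrent connections for 10 seconds and reports throughput and latency percentiles.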
Now, about queuing: in a sense, every request is queued because only one thing can run at the same time in one Node process. So you can say that the answer to how many requests a Node http server can handle without queuing any requests is: one. Just like for Nginx for example.
Now, the time to respond is an entirely different matter and depends on many factors, like what you actually do in those request handlers.
For example, the mean time per request that I got in my experiments there for 10,000 concurrent connections was less than 2 milliseconds. If a handler does more work, it can of course take more time, but there is no single number for all Node servers. It depends on how efficient your implementation is.
Now, the big no-nos for handling concurrency in Node are:
Never use any "Sync" function
Never use long running for and while loops
Never do any CPU-intensive work in the process that handles the requests
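To illustrate the first rule with Node's fs module (the file name is a placeholder):

    const fs = require('fs');

    // Blocks the event loop until the entire file is read -- avoid this
    // inside request handlers:
    // const data = fs.readFileSync('big.json', 'utf8');

    // Non-blocking alternative: other requests keep running while the
    // read is pending.
    fs.promises.readFile('big.json', 'utf8')
      .then((data) => console.log('read', data.length, 'characters'))
      .catch(console.error);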
I want this kind of structure:
An Express backend gets a request and runs a function; this function gets data from different APIs and saves it to the DB. Because this could take minutes, I want it to run in parallel while my web server continues processing requests.
I want this because of the following scenario:
The user has a dashboard. After they log in, the app starts to collect data from the APIs and prepares the dashboard for the user. During that time the user can navigate through the site, or even close the browser, but the function has to keep running until it finishes fetching the data. Once it finishes, all the data will be saved to the DB and the dashboard will be ready for the user.
How can I do this using child_process or any other kind of structure in Node.js?
Since what you're describing is all async I/O (networking or disk) and is not CPU intensive, you don't need multiple child processes in order to effectively serve multiple requests. This is the beauty of node.js. With async I/O, node.js can be working on many different requests over the same period of time.
Let's suppose part of your process is downloading an image. Your node.js code sends a request to fetch an image. That request is sent off via TCP. Immediately, there is nothing else to do on that request. It's winging its way to the destination server, and the destination server is preparing the response. While all that is going on, your node.js server is completely free to pull other events from its event queue and start working on other requests. Those other requests do something similar (they start async operations and then wait for events to happen sometime later).
Your server might get 10 different async operations started and "in flight" before the first one actually starts getting a response. When a response starts coming in, the system puts an event into the node.js event queue. When node.js has a moment between other requests, it pulls the next event out of the event queue and processes it. If the processing has further async operations (like saving it to disk), the whole async and event-driven process starts over again as node.js requests a write to disk and node.js is again free to serve other events. In this manner, events are pulled from the event queue one at a time as they become available and lots of different operations can all get worked on in the idle time between async operations (of which there is a lot).
The only thing that upsets the apple cart and ruins the ability of node.js to juggle lots of different things all at once is an operation that takes a lot of CPU cycles (like say some unusually heavy duty crypto). If you had something like that, it would "hog" too much of the CPU and the CPU couldn't be effectively shared among lots of other operations. If that were the case, then you would want to move the CPU-intensive operations to a group of child processes. But, just doing async I/O (disk, networking, other hardware ports, etc...) does not hog the CPU - in fact it barely uses much node.js CPU.
So, the next question is often "how do I know if I have too much stuff that uses the CPU". The only way to really know is to just code your server properly using async I/O and then measure its performance under load and see how things go. If you're doing async things appropriately and the CPU still spikes to 100%, then you have too much CPU load and you'll want to either use generic clustering or move specific CPU-heavy operations to a group of child processes.
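To connect this back to the question: a minimal sketch of that fire-and-forget shape, assuming Express and Node 18+ (for global fetch); collectData, saveToDb, and the API URLs are hypothetical:

    const express = require('express');
    const app = express();
    app.use(express.json());

    app.post('/login', (req, res) => {
      // Start the long-running async work without awaiting it.
      collectData(req.body.userId).catch((err) =>
        console.error('dashboard prep failed', err));
      res.json({ status: 'dashboard is being prepared' }); // responds right away
    });

    async function collectData(userId) {
      const results = await Promise.all([
        fetch('https://api-one.example.com/data').then((r) => r.json()),
        fetch('https://api-two.example.com/data').then((r) => r.json()),
      ]); // both API calls are in flight at the same time
      await saveToDb(userId, results); // hypothetical DB helper
    }

    app.listen(3000);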