One of my APIs in Node.js does a lot of CRUD operations and finally returns; sometimes it takes 10-15 seconds to respond. How can I offload this work so the response time is quick? And how do I handle an error that occurs in the offloaded task, given that the response will already have been sent?
Currently we are using the async module to make parallel calls to Cassandra. Some people suggest using the compute-cluster module in Node.js, but that seems helpful for computation-intensive tasks. I think my problem is heavy on I/O, not computation.
I want to ask some clarifying questions about NodeJS that I think are poorly explained in the resources I have studied.
Many sources say that NodeJS is not suitable for complex calculations, because it is single-threaded and queries are executed sequentially.
I created the simplest possible server in Node and wrote an endpoint that takes about 10 seconds to execute (a loop). Then I made 10 consecutive requests via Postman, and indeed, each subsequent request began executing only after the previous one had returned a response.
Do I understand correctly that, in this case, if the execution time of one endpoint is approximately 300ms and 700 users access the server at the same time, the waiting time for the last user will be a critical 210,000ms?
I have also heard that an advantage of NodeJS is its ability to support a large number of simultaneous connections. What does this mean, and why is it a plus if the response for the last person in the previous question will still take very long?
Another statement I came across is that libuv allows many I/O operations to run at the same time. How does that work if NodeJS processes requests sequentially anyway?
Thank you very much!
TL;DR: I/O operations don't block the single execution thread. CPU-intensive tasks DO block the thread, and a NodeJS web server is not a good option in that case.
Yes, if your endpoint needs 300ms of synchronous (CPU) work to complete the operation, the last user will wait 210,000ms.
NodeJS is good at handling a large number of connections when the work it needs to do is I/O-bound. It is not a good choice if the endpoint needs a lot of CPU time.
I/O operations operate at a different layer and take essentially zero CPU time from the JavaScript thread. That means that once the I/O operation is fired, NodeJS can accept new calls to the endpoint. NodeJS then polls the operating system for completed I/O calls whenever it's not using the CPU and executes the callbacks. This is what allows it to handle a large number of concurrent requests without one user needing to wait for others to finish.
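The contrast above can be shown in a few lines. This is a minimal sketch (not from the original post) in which a busy loop stands in for synchronous CPU work and a timer stands in for pending I/O; the `order` array records what actually runs when:

```javascript
// Minimal sketch: synchronous CPU work blocks the single JS thread,
// while a pending timer (standing in for I/O) does not need the CPU at all.
const order = [];

function busyWaitMs(ms) {
  // Synchronous CPU work: holds the JS thread for the whole duration.
  const end = Date.now() + ms;
  while (Date.now() < end) {}
}

// Schedule a timer due in 10ms...
setTimeout(() => order.push('timer callback'), 10);

// ...but hog the CPU for 100ms. The timer cannot fire during this,
// even though it is long overdue.
busyWaitMs(100);
order.push('synchronous code done');

// The overdue timer callback only runs once the thread is free again,
// so this prints: synchronous code done -> timer callback
setTimeout(() => console.log(order.join(' -> ')), 50);
```

If `busyWaitMs` were replaced by a real database or network call, the thread would be free for those 100ms and the 10ms timer would fire on time.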
I have been using Node.js for a while and wonder how it behaves when multiple clients trigger blocking / time-consuming work in their responses.
Consider the following situation:
1- There are many endpoints, and one of them is time-consuming and responds only after a few seconds.
2- Suppose 100 clients simultaneously make requests to my endpoints, one of which takes a considerable amount of time.
Does that endpoint block the whole event loop and make other requests wait?
Or, in general, do requests block each other in Node.js?
If not, why not? It is single-threaded, so why don't they block each other?
Node.js does use threads behind the scenes to perform I/O operations. To be more specific to your question: there is a limit at which a client will have to wait for an idle thread to perform a new I/O task.
You can build an easy toy example: run several I/O tasks concurrently (using Promise.all, for instance) and measure the time each takes to finish. Then add a new task and repeat.
At some point you'll notice two groups. For example, 4 requests took 250ms and the other 2 took 350ms (and there you see "requests blocking each other").
Node.js is commonly referred to as single-threaded because of how it executes CPU work by default (in contrast to its non-blocking I/O architecture). It is therefore not very wise to use it for intensive CPU operations, but it is very efficient when it comes to I/O operations.
I want this kind of structure:
The Express backend gets a request and runs a function; this function will get data from different APIs and save it to the DB. Because this could take minutes, I want it to run in parallel while my web server continues processing requests.
I want this because of this scenario:
The user has a dashboard. After they log in, the app starts to collect data from APIs and prepares the dashboard. During that time the user can navigate through the site, or even close the browser, but the function has to run until it finishes fetching data. Once it finishes, all data will be saved to the DB and the dashboard will be ready for the user.
How can I do this using child_process or any other kind of structure in Node.js?
Since what you're describing is all async I/O (networking or disk) and is not CPU intensive, you don't need multiple child processes in order to effectively serve multiple requests. This is the beauty of node.js. With async I/O, node.js can be working on many different requests over the same period of time.
Let's suppose part of your process is downloading an image. Your node.js code sends a request to fetch an image. That request is sent off via TCP. Immediately, there is nothing else to do on that request. It's winging its way to the destination server and the destination server is preparing the response. While all that is going on, your node.js server is completely free to pull other events from its event queue and start working on other requests. Those other requests do something similar (they start async operations and then wait for events to happen sometime later).
Your server might get 10 different async operations started and "in flight" before the first one actually starts getting a response. When a response starts coming in, the system puts an event into the node.js event queue. When node.js has a moment between other requests, it pulls the next event out of the event queue and processes it. If the processing has further async operations (like saving it to disk), the whole async and event-driven process starts over again as node.js requests a write to disk and node.js is again free to serve other events. In this manner, events are pulled from the event queue one at a time as they become available and lots of different operations can all get worked on in the idle time between async operations (of which there is a lot).
The only thing that upsets the apple cart and ruins the ability of node.js to juggle lots of different things all at once is an operation that takes a lot of CPU cycles (like say some unusually heavy duty crypto). If you had something like that, it would "hog" too much of the CPU and the CPU couldn't be effectively shared among lots of other operations. If that were the case, then you would want to move the CPU-intensive operations to a group of child processes. But, just doing async I/O (disk, networking, other hardware ports, etc...) does not hog the CPU - in fact it barely uses much node.js CPU.
So, the next question is often "how do I know if I have too much stuff that uses the CPU". The only way to really know is to just code your server properly using async I/O and then measure its performance under load and see how things go. If you're doing async things appropriately and the CPU still spikes to 100%, then you have too much CPU load and you'll want to either use generic clustering or move specific CPU-heavy operations to a group of child processes.
I've gone through some introductory articles on Node.js and the event loop, and one thing is not clear: if there are multiple concurrent requests, are the responses always sequential, in the order the requests were made? Say 20 requests complete simultaneously; will the 20th response have to wait for the other 19 to be cleared (responded back to the client)?
Update: What I was wondering is whether this is similar to how multiple setTimeouts get queued?
node.js runs Javascript in a single thread. Thus, only one piece of Javascript is running at any given time.
But, almost all I/O (e.g. networking, file access, etc...) is asynchronous and non-blocking. So, if 20 requests are made of your server in a very short period of time, the first request to reach the server will start executing its request handler and the other requests will be queued. But, as soon as the first request hits an asynchronous operation (such as reading from the local file system), that request will be suspended while the non-blocking asynchronous I/O is taking place and the next request in line will start to run.
This second request will then run until it either finishes or until it also hits a piece of asynchronous I/O. When that second request is waiting on the async I/O, then another request will get to run. The system scheduler will determine if the next operation is the completion of the async I/O request from the first request or if it will start the third request that was waiting in the queue.
The various requests will continue this way until all are done. Multiple requests may be "in-flight" at the same time (meaning they've been started, but have not completed yet), but only one is ever actually executing code at any given moment.
This is sometimes referred to as cooperative tasking. There is no pre-emptive multi-tasking among the different requests where each automatically gets a time slice of the host CPU. But, any time a request hits an asynchronous I/O operation, then that tells the scheduler that other requests waiting to run can run.
This is all managed from an event queue in node.js. A piece of Javascript runs until it completes. If it makes an asynchronous I/O request and then completes, then another piece of Javascript that is also waiting to run can start to run. When it is done, the JS engine pulls the next item out of the event queue and runs it. That might be a new incoming request or it might be the completion of some asynchronous I/O operation on some other request.
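One consequence of this scheduling is that responses are not tied to request order: the handler whose I/O finishes first responds first. Here is a minimal sketch (timers standing in for real async I/O) where the second "request" overtakes the first:

```javascript
// Sketch: two "requests" started in order; the one with faster I/O
// completes first. setTimeout stands in for real async I/O.
const completionOrder = [];

function handleRequest(id, ioMs) {
  // The synchronous part runs to completion, then the handler is
  // suspended until its "I/O" finishes and its callback is queued.
  return new Promise((resolve) =>
    setTimeout(() => {
      completionOrder.push(id);
      resolve(id);
    }, ioMs)
  );
}

// Request 1 arrives first but its I/O takes longer.
handleRequest(1, 100);
handleRequest(2, 20);

// Request 2 responds first: prints [ 2, 1 ]
setTimeout(() => console.log(completionOrder), 200);
```

This is also the answer to the setTimeout comparison in the question: completion events are queued as they become ready, not in submission order.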
The advantages of this type of system are:
It scales really well, particularly for I/O bound server operations, because you can have many requests "in-flight" at the same time with only a single Javascript thread. The cooperative tasking is very lightweight and fast.
Programming a system like this has far fewer "race conditions" to watch out for because no two pieces of Javascript are ever running at the actual same time. This means you can often share state between requests without ever having to use mutexes (like you would in a multi-thread environment). Since thread-safe bugs are often very difficult to avoid and to test for, it's a major advantage to eliminate these types of bugs.
The cooperative model is conceptually simple and easier to learn and to program safely.
The disadvantages of this type of system are:
It does not share the CPU among tasks that are CPU-bound. A node.js programmer with lots of heavy CPU-bound computations often has to use clustering or child processes to handle the heavy CPU computations so as to not over-burden the main request-processing Javascript thread with that work and make it too unresponsive.
Clustering of processes is required to maximize the use of multiple processors and then any shared data must be shared across those processes. People often use an in-memory database like Redis to share data between processes.
You can't just willy nilly fire up another Javascript thread to go off and do something.
I'm new to nodeJS and was wondering about the single-instance model of Node.
In a simple Node.js application, when some blocking operation is handled asynchronously with callbacks, does the main thread running Node.js handle the callback as well?
If the request is to get some data from the database, there are hundreds of concurrent users, and each DB operation takes a couple of seconds, then when the callback is finally fired (for each of the connections), is the main thread that accepted these requests also used for executing the callback? If so, how does Node.js scale, and how does it respond so fast?
Each instance of nodejs runs in a single thread. Period. When you make an async call to, say, a network request, it doesn't wait around for it, not in your code or anywhere else. It has an event loop that runs through. When the response is ready, it invokes your callback.
This can be incredibly performant, because it doesn't need lots of threads and all the memory overhead, but it means you need to be careful not to do synchronous blocking stuff.
There is a pretty decent explanation of the event loop at http://blog.mixu.net/2011/02/01/understanding-the-node-js-event-loop/ and the original jsconf presentation by Ryan Dahl http://www.youtube.com/watch?v=ztspvPYybIY is worth watching. Ever seen an engineer get a standing ovation for a technical presentation?