I have a Node.js app with one getUsers endpoint. This endpoint fetches the first 10 users from a MongoDB database and returns them in a JSON response.
Let's say this is a synchronous process (async won't make sense here, since I need to fetch and return the result within the same request) and the entire process takes 1 second. Now suppose I have 4 CPUs in a virtual machine and I am running Node.js in cluster mode (meaning 4 Node.js processes running together).
Q1) Does this mean my app can handle 4 simultaneous requests per second? If this is true, how can my app handle 1000 requests per second?
Q2) I know Node.js is single-threaded, but is there any way a Node.js instance can use more than one thread per CPU so that it can handle more requests per CPU per second?
Q1) Does this mean my app can handle 4 simultaneous requests per second?
If we neglect the load balancer's processing time, your cluster could handle around 4 RPS (requests per second).
If this is true, how can my app handle 1000 requests per second?
In short: by scaling, either via threads or via more server instances.
More details below.
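For reference, here is a minimal sketch of what cluster mode itself looks like (the port and response body are placeholders):

// A minimal cluster-mode sketch: one Node.js worker process per CPU core.
const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isPrimary) {
  // The primary forks one worker per core and distributes incoming
  // connections among them.
  for (let i = 0; i < os.cpus().length; i++) cluster.fork();
} else {
  // Each worker runs its own event loop with its own copy of the server.
  http.createServer((req, res) => res.end('ok')).listen(3000);
}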
Q2) I know Node.js is single-threaded
That's a false statement.
Node.js has a pool of at least 4 threads for handling I/O operations, plus a few more threads for other under-the-hood work.
This makes even a single Node.js instance pretty powerful at handling many async operations.
but is there any way the Node.js instance can use more than 1 thread per CPU
Yes, you can use worker_threads to create as many threads as you wish.
so that it can handle more requests per CPU per second?
First, you have to make your request handler asynchronous so it can process more than one request at a time within a single thread (using a single CPU core).
Then you can scale up via worker_threads to use more threads, i.e. more cores, to process a higher volume of requests.
And that's all inside a single process.
P.S. Since worker threads can run in parallel on separate cores, a common recommendation is to spawn around N - 1 workers, where N is the number of logical cores on your CPU, leaving a core for the main thread.
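For illustration, a minimal worker_threads sketch; the file layout and the heavyTransform function are hypothetical, and for simplicity it spawns a fresh worker per task (a real app would reuse a pool of about N - 1 workers):

// main.js - offload a CPU-heavy task to a worker thread.
const { Worker } = require('worker_threads');

function runInWorker(payload) {
  return new Promise((resolve, reject) => {
    const worker = new Worker('./worker.js', { workerData: payload });
    worker.once('message', resolve); // resolves with the worker's result
    worker.once('error', reject);
  });
}

// worker.js - runs on its own thread, so the main event loop stays free:
// const { parentPort, workerData } = require('worker_threads');
// parentPort.postMessage(heavyTransform(workerData)); // heavyTransform is hypothetical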
Q1) Does this mean my app can handle 4 simultaneous requests per second? If this is true, how can my app handle 1000 requests per second?
Yes, but not because of multiple instances: Node.js accepts each incoming request asynchronously, even though the processing of a request is itself synchronous. You could handle those 4 with a single instance.
The number of requests that can be handled depends on your server.
I recommend running a load test. For example, I have recently used Grafana k6.
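As a starting point, here is a minimal k6 script (the target URL is a placeholder); run it with k6 run load-test.js:

// load-test.js - hammer the endpoint with 100 virtual users for 30 seconds.
import http from 'k6/http';

export const options = {
  vus: 100,        // concurrent virtual users
  duration: '30s',
};

export default function () {
  http.get('http://localhost:3000/getUsers');
}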
Q2) I know Node.js is single-threaded, but is there any way a Node.js instance can use more than one thread per CPU so that it can handle more requests per CPU per second?
No. If you want to use more CPUs, you have to create multiple instances in cluster mode.
I want to ask some clarifying questions about Node.js that I think are poorly explained in the resources I've studied.
Many sources say that Node.js is not suitable for complex calculations, because it is single-threaded and queries are executed sequentially.
I created the simplest possible server in Node and wrote an endpoint that takes about 10 seconds to execute (a loop). Then I made 10 consecutive requests via Postman, and indeed, each subsequent request began executing only after the previous one had responded.
Do I understand correctly that in this case, if the execution time of one endpoint is approximately 300 ms and 700 users hit the server at the same time, the waiting time for the last user will be a critical 700 × 300 ms = 210,000 ms?
I have also heard that an advantage of Node.js is its ability to support a large number of simultaneous connections. What does that mean, and why is it a plus if the answer for the last person in the previous question will still take very long?
Another statement I came across is that libuv allows many I/O operations to run at the same time. How does that work if Node.js processes requests sequentially anyway?
Thank you very much!
TL;DR: I/O operations don't block the single execution thread. CPU-intensive tasks DO block the thread, and a Node.js web server is not a good option in that case.
Yes: if your endpoint needs 300 ms of synchronous (CPU) work to complete the operation, the last user will wait 210,000 ms.
Node.js is good at handling a large number of connections when the work it needs to do is I/O-bound. It is not a good choice if the endpoint needs a lot of CPU time.
I/O operations happen at a different layer and take essentially zero CPU time in your process. That means that once an I/O operation is fired, Node.js can accept new calls to the endpoint. Node.js then polls the operating system for completed I/O whenever it isn't using the CPU, and executes the callbacks. This is what allows it to handle a large number of concurrent requests without one user having to wait for others to finish.
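A minimal sketch of the difference, assuming Express purely for brevity (the 300 ms timer stands in for a database call):

const express = require('express');
const app = express();

// CPU-bound handler: the busy loop holds the single JS thread for ~300 ms,
// so concurrent requests get serialized behind it.
app.get('/cpu', (req, res) => {
  const end = Date.now() + 300;
  while (Date.now() < end) {} // nothing else can run during this loop
  res.send('done');
});

// I/O-bound handler: the wait happens outside the JS thread, so the event
// loop stays free to accept other requests in the meantime.
app.get('/io', (req, res) => {
  setTimeout(() => res.send('done'), 300);
});

app.listen(3000);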
I have been using Node.js for a while and wonder how it copes when multiple clients trigger blocking or time-consuming work.
Consider the following situation:
1- There are many endpoints, and one of them is time-consuming, responding only after a few seconds.
2- Suppose 100 clients simultaneously make requests to my endpoints, including the one that takes a considerable amount of time.
Does that endpoint block the whole event loop and make the other requests wait?
Or, in general, do requests block each other in Node.js?
If not, why not? It is single-threaded, so why don't they block each other?
Node.js does use threads behind the scenes to perform I/O operations. To be more specific to your question: there is a limit beyond which a client has to wait for an idle thread to perform a new I/O task.
You can build an easy toy example: run several I/O tasks concurrently (using Promise.all, for instance) and measure the time each takes to finish. Then add a new task and repeat.
At some point you'll notice two groups. For example, 4 requests took 250 ms and the other 2 took 350 ms (and there you get "requests blocking each other").
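A toy version of that experiment: crypto.pbkdf2 runs on the libuv thread pool (4 threads by default), so with 6 concurrent calls the first 4 should finish together and the last 2 should wait for a free thread:

// Six thread-pool tasks competing for four libuv threads.
const crypto = require('crypto');

const start = Date.now();
function task(label) {
  return new Promise((resolve) => {
    crypto.pbkdf2('secret', 'salt', 500000, 64, 'sha512', () => {
      console.log(`${label} finished after ${Date.now() - start} ms`);
      resolve();
    });
  });
}

Promise.all([1, 2, 3, 4, 5, 6].map((i) => task(`task ${i}`)));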
Node.js is commonly referred to as single-threaded because of how it executes CPU operations by default (in contrast to its non-blocking I/O architecture). It therefore isn't very wise to use it for intensive CPU operations, but it is very efficient when it comes to I/O.
I am using Spring Boot and Java 8.
Calling an API with 1 employee ID takes 1 millisecond. So if I call the API 100,000 times with different employee IDs,
why does it take hours, and not 100,000 × 1 ms = 100 seconds, i.e. roughly 1.7 minutes?
Spring Boot uses a thread pool to manage the workload; the maximum number of worker threads is set to 200 by default.
Though that is a good number, the number of threads that can work in parallel depends on CPU time slicing and the availability of backend resources. Assuming the backend resources are unlimited, throughput depends solely on the CPU time available to each thread. On a multi-core CPU, that is the number of cores available to serve the embedded Tomcat container.
As Spring MVC is a blocking framework, in a normal quad-core single-CPU environment (assuming all 4 cores are able to serve), that number is 4. This means a maximum of 4 requests can be processed truly in parallel; the rest are queued and picked up when the next CPU slice becomes available.
Mathematical analysis:
Time taken by the API to process 1 request = 1 ms
Time taken by the API to process 4 concurrent requests = 1 ms
Time taken by the API to process 100,000 concurrent requests = 100,000 / 4 × 1 ms = 25 seconds
That is just the best-case scenario. In real scenarios the CPUs are unlikely to all provide a time slice at the same instant, so you are likely to see longer times.
In such scenarios, it would be better to use the reactive stack (Spring WebFlux) rather than the conventional blocking Spring MVC framework in Spring Boot.
The API you're pulling from could be limiting the number of requests you can make in a given period of time. If you don't have access to the API's source, I would try running larger and larger batches of requests until you notice them taking significantly longer.
Well, the time needed to get a response from a web server depends on its hosting machine and environment.
Usually a single machine has a limited number of threads in a thread pool, and each request is bound to one thread. So under concurrent load, some number of requests are processed on the available threads while the rest wait in a queue.
That can be the reason your requests take a while to get a response, or why some of them even time out.
Does anyone know how many requests a basic single instance of a Node HTTP server can handle per second without queueing any requests?
Actually, I need to write a Node.js app that should be able to respond to about a thousand incoming requests within 100 ms, consistently. I'm testing it on a 4-CPU server, running 4 instances in cluster mode, but currently it can only handle fewer than 1000 requests per second within 100 ms consistently.
See this answer:
How many clients can an HTTP server handle?
It's not about queuing but about how many concurrent requests Node can handle at the same time. It contains example benchmarks that you can use and tweak to your needs.
Now, about queuing: in a sense, every request is queued, because only one thing can run at a time in a single Node process. So you could say that the answer to how many requests a Node HTTP server can handle without queuing any is: one. Just like Nginx, for example.
Now, the time to respond is an entirely different matter and depends on many factors, like what you actually do in those request handlers.
For example, the mean time per request that I got in my experiments here for 10,000 concurrent connections was less than 2 milliseconds. A handler that does more work will take longer, of course, but there is no single number for all Node servers; it depends on how efficient your implementation is.
Now, the big no-nos for handling concurrency in Node are:
Never use any "Sync" function.
Never run long for or while loops.
Never do any CPU-intensive work in the process that handles the requests.
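To illustrate the first rule, a small sketch (the file name is a placeholder):

const fs = require('fs');

// Blocks the whole process until the read completes:
// const data = fs.readFileSync('./big-file.json');

// Non-blocking: the read happens on the libuv thread pool and the callback
// runs when it's done, so the server keeps accepting other requests.
fs.readFile('./big-file.json', (err, data) => {
  if (err) throw err;
  console.log(`read ${data.length} bytes`);
});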
So I'm starting to use Node.js for a project I'm working on.
When a client makes a request, my Node.js server fetches a JSON document from another server and reformats it into a new JSON response that gets served to the client. However, the JSON the Node server gets from the other server can potentially be pretty big, so that "massaging" of the data is quite CPU-intensive.
I've been reading for the past few hours about how Node.js isn't great for CPU-bound tasks, and the main suggestion I've seen is to spawn a child process (basically a .js file running in a separate instance of Node) that deals with any CPU-intensive tasks that might block the main event loop.
So let's say I have 20,000 concurrent users; that would mean 20,000 OS-level processes get spawned as these child processes run.
Does that sound like a good idea? (A different web server would just create 20,000 threads in the same process.)
I'm not sure whether I should be running a child process, but I do need to make the CPU-intensive task non-blocking. Any ideas what I should do?
The people who say that don't know how to architect solutions.
Node.js is exactly what it says: it is a node, and it should be treated as such.
In your example, your Node instance connects to an external API, grabs JSON to process, and sends it back.
i.e.
1. GET //server.com/getJSON
2. Process the JSON
3. POST //server.com/postJSON
So what do you do?
Ask yourself: is latency an issue? If so, then Node isn't the solution.
However, if you are more interested in raw processing power, then instead of 1 request done in 4 seconds,
you want 200 requests finishing in 10 seconds, with each individual one taking about the full 10 seconds.
Measure how long your JSON massaging should take. If it is less than 1 second,
just run 4 Node instances instead of 1.
However, if it's more complex than that, break the JSON into segments to process, and use asynchronous callbacks to process each segment:
process.nextTick(() => { doProcess(segment1); process.nextTick(() => doProcess(segment2)); /* ... */ });
Each doProcess call schedules the next one, and Node.js will interleave time between the requests.
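A fuller sketch of that idea (chunkSize and massageSegment are hypothetical). Note it uses setImmediate instead of process.nextTick: nextTick callbacks run before pending I/O events and can starve the event loop, while setImmediate yields between chunks:

// Process a big array one chunk per event-loop turn.
function processInChunks(items, chunkSize, done) {
  let i = 0;
  function step() {
    const end = Math.min(i + chunkSize, items.length);
    for (; i < end; i++) {
      massageSegment(items[i]); // hypothetical CPU-heavy transform
    }
    if (i < items.length) {
      setImmediate(step); // yield so other requests can be served
    } else {
      done();
    }
  }
  step();
}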
Now take that solution, scale it to 4 Node instances per server and 2-5 servers,
and suddenly you have an extremely scalable and cost-effective solution.