Disruptor pattern with variable-duration "business logic" - multithreading

How can the disruptor be used effectively on processes where there are variable-duration "business logic" tasks? Has it been done before?
Can it be done with a second ring buffer processing the response stage? If so, how do I go about that?
I understand the Disruptor and see some specific parts of my call chain where I could apply the concept. To be specific, the application is a middleware-type application which performs the following steps:
- read inbound message, unmarshall to request
- identify customer details for request, determine workflow steps to process
- call backend system to execute steps
- collate responses, log response, marshall, and return to consumer
The issue is that some instances of the backend steps can take a "long" time, which potentially forces the response stage for short-running tasks to wait for longer-running tasks. Assume the backend system can be called either async or sync - so the idea would be that the backend-system call is simply a consumer that triggers an async request to the backend.
Backend system response time can be anywhere from 5ms (some requests) through 50ms (90% of requests) up to 5000ms (1% of requests) - think large disk I/O.
I can see the Disruptor as potentially highly efficient but can't get my head around this hurdle to keep average latency down.
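To make the idea concrete, here is a minimal sketch of the "second ring buffer" shape in plain JavaScript. This is not the Disruptor API - arrays and promises stand in for ring buffers, and backendCall is a made-up stand-in for the real backend with its variable latency:

// The backend consumer only *fires* the async call and returns immediately,
// so a 5000ms request never stalls the stage for the 5ms requests behind it.
const responseQueue = [];  // stands in for the second ring buffer

function backendCall(request) {
  // simulate variable-duration backend work (1% slow, the rest fast)
  const latency = Math.random() < 0.01 ? 5000 : 50;
  return new Promise(resolve =>
    setTimeout(() => resolve('result for ' + request), latency));
}

function backendStage(event) {
  backendCall(event.request).then(response => {
    // publish the completion for the response stage to consume
    responseQueue.push({ id: event.id, response });
  });
}

// Response-stage consumer drains completions as they arrive,
// i.e. in completion order rather than submission order.
setInterval(() => {
  while (responseQueue.length > 0) {
    const { id, response } = responseQueue.shift();
    console.log('respond to ' + id + ': ' + response);
  }
}, 10);

for (let i = 0; i < 20; i++) backendStage({ id: i, request: 'req' + i });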

air traffic controller for threads when calling a REST API

DISCLAIMER: If this post is off-topic to this site, please recommend a site where this post would be appropriate.
On Ubuntu 18.04, in bash, I am writing a network-based, threaded application that requires multiple servers. It receives files through the network and processes them, ultimately making an API call that finishes the processing and logs the results to a database for later retrieval and reporting.
So far I have written the application using non-threaded programming models and concepts. That means the files are processed one at a time in real time. This works great if there is no sudden burst of files and/or a backlog of files to process. The main bottleneck has been the way I sequentially send files to the API one after another, waiting until the entire operation has taken place for one file and the API returns the results. The API has a rate limit of 8 calls per second. But since each call takes from 0.75 to 1 second, my program waits until the operation is done and only processes about 1 file per second through the API. In short, I did not have to worry about scheduling API calls because I could barely do one call per second.
Since the capacity is there to process 8 files per second, and I need more speed, I have been converting my single-threaded, sequential application into a parallel, scalable, multi-threaded application. This new version can spawn enough threads to send 8 files per second to the REST API and much more. So now I have the opposite problem. I am sending too many requests per second to the REST API and am in danger of triggering penalties, etc. Ultimately, when my traffic is higher, I will upgrade my subscription to the API and get more calls per second, but this current dilemma has got me thinking about how to schedule the API calls with different threads.
The purpose of this post is to discuss an idea about how to schedule these REST API calls across various threads. Specifically, I want to discuss how to coordinate timing and usage of the API while maintaining efficiency and yet not overloading the API. In short, I want to coordinate a group of threads so that the API is properly used. Not too fast and not too slow.
Independent of my application, this idea could be useful in a number of generically similar scenarios.
My idea is to create an "air traffic controller" ("ATC") so that the threads of the application have a centralized timing authority to check when they are ready to submit files to the REST API. The ATC would know how many time slots/calls per time period (in this case, calls per second) the API can schedule. The ATC would be listening for the threads to request a time slot ("launch code") which would give them a time slot in the future to perform their API call. The ATC would decide based on the schedule of other launch codes that it has already handed out.
In my case, from the start of the upload of the file to the API, it could take 0.75 to 1 second to complete the processing and receive a response from the API. This does not affect the count of new API calls that can be performed. It is just a consideration of how long the threads will be waiting once they call the API. It may not be relevant to this overall discussion.
Each thread would obviously have to do some error handling. If the API timed out or threw an error, then the thread would have to handle it and, if appropriate, get back in line with the ATC and ask for a new launch code. Maybe it should report the error to the ATC for centralized logging?
In situations where the file processing needs burst above 8 files per second, there would be a scheduling backlog where the threads should wait their turn as assigned by the ATC.
Here are some other considerations:
Function
The ATC would be a lightweight daemon that does the following:
- listens on some TCP port
- receives a request (security token (?), thread id, priority)
- authenticates the request (?)
- examines the schedule
- reserves the next available time slot
- returns the launch code (security token (?), current time, launch timing offset to current time, URL and auth token for the API)
- expunges expired launch codes
The ATC would need the following:
- to know what port it is supposed to run on
- to know how many slots per time period it was set to schedule (e.g. 8 per second)
- to have super fast read/write access to the schedule (associative array?)
- to know the URL and corresponding auth token for the thread to use
- maybe to know multiple URLs and auth tokens for load balancing
Here are more things to consider:
Security
How could we keep the ATC secure while ensuring high performance?
Network-level security (e.g. firewalls allowing only the IP addresses of the file-processing servers?)
Auth tokens or logins and passwords?
Performance
What would the requirements be for this ATC server? Would this be taxing to a CPU and memory?
Timing
How often would an NTP call be needed? By the ATC server? By the servers which call the API?
Scalability
Being able to provide different URLs and auth tokens would allow the ATC to load balance with different API providers.
Threading of the ATC itself
Would the ATC need to spawn threads to be able to handle each new request?
How does a web server handle requests?
How would the various threads share a common schedule?
In a non-threaded environment, the ATC would possibly keep an associative array in memory to keep performance as high as possible. How would the various threads of the ATC have access to the same schedule?
So here is my question. Does this exist? If not, what are some best practices in trying to build the above?
It seems like a beanstalkd kind of network service, except it only provides permission/scheduling and is extremely dependent on timing.
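For what it's worth, the slot-reservation core I have in mind seems small. A sketch in JavaScript, just for illustration (the names and the single-process design are placeholders):

const RATE = 8;               // allowed API calls per second
const INTERVAL = 1000 / RATE; // ms between launch slots
let lastSlot = 0;

// Hand out the next free time slot: either "now", or one interval
// after the last slot already reserved, whichever is later.
function reserveSlot() {
  const now = Date.now();
  lastSlot = Math.max(now, lastSlot + INTERVAL);
  return lastSlot; // the "launch code": a timestamp to fire at
}

// A worker asks for a slot, then sleeps until its launch time:
const slot = reserveSlot();
setTimeout(() => {
  // perform the API call here
}, slot - Date.now());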

Does Node.js need a job queue?

Say I have an Express service which sends email:
app.post('/send', function(req, res) {
    sendEmailAsync(req.body).catch(console.error)
    res.send('ok')
})
This works.
I'd like to know the advantage of introducing a job queue here, like Kue.
Does Node.js need a job queue?
Not generically.
A job queue solves a specific problem, usually when you have more to do than a single node.js process can handle at once, so you "queue" up things to do and may even dole them out to other processes to handle.
You may even have priorities for different types of jobs or want to control the rate at which jobs are executed (suppose you have a rate limit cap you have to remain below on some external server or just don't want to overwhelm some other server). One can also use nodejs clustering to increase the amount of tasks that your node server can handle. So, a queue is about controlling the execution of some CPU or resource intensive task when you have more of it to do than your server can easily execute at once. A queue gives you control over the flow of execution.
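For example, the minimal core of that flow control might look like this - a queue that runs at most a fixed number of jobs at a time (illustrative names only, not Kue's API; each job is a function returning a promise):

function makeQueue(limit) {
  const pending = [];
  let active = 0;
  function next() {
    if (active >= limit || pending.length === 0) return;
    active++;
    const job = pending.shift();
    // run the job; when it settles, free the slot and pull the next one
    job().catch(console.error).then(() => { active--; next(); });
  }
  return job => { pending.push(job); next(); };
}

const enqueue = makeQueue(2); // at most 2 emails in flight at once
enqueue(() => sendEmailAsync({ to: 'a@example.com' }));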
I don't see any reason for the code you show to use a job queue unless you were doing a lot of these all at once.
The Kue library you mention (and the similar https://github.com/OptimalBits/bull library) lists these features on its NPM page:
Delayed jobs
Distribution of parallel work load
Job event and progress pubsub
Job TTL
Optional retries with backoff
Graceful workers shutdown
Full-text search capabilities
RESTful JSON API
Rich integrated UI
Infinite scrolling
UI progress indication
Job specific logging
So, I think it goes without saying that you'd add a queue if you needed some specific queuing features and you'd use the Kue library if it had the best set of features for your particular problem.
In case it matters, your code sends res.send('ok') before it finishes with the async task and before you know whether it succeeded or not. Sometimes there are reasons for doing that, but sometimes you want to communicate back whether the operation was successful or not (which you are not doing).
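If you did want to report the outcome, the handler could wait for the send to settle first, something like:

app.post('/send', async function(req, res) {
  try {
    await sendEmailAsync(req.body)
    res.send('ok')
  } catch (err) {
    console.error(err)
    res.status(500).send('send failed')
  }
})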
Basically, the point of a queue would simply be to give you more control over their execution.
This could be for things like throttling how many you send, giving priority to other actions first, evening out the flow (i.e., if 10000 get sent at the same time, you don't try to send all 10000 at the same time and kill your server).
What exactly you use your queue for, and whether it would be of any benefit, depends on your actual situation and use cases. At the end of the day, it's just about controlling the flow.

Java ExecutorService for Async web service

We need to implement an async web service.
Behaviour of the web service:
We send the request for an account to the server and it sends back a sync response with an acknowledgement ID. After that we get multiple callback requests which contain that acknowledgement ID. The last callback request for an acknowledgement ID will contain a text (completed:true) in the response, which tells us that this is the last callback request for that account and acknowledgement ID. This lets us know that the async call for a particular account is completed, so we can mark its final status. We need to execute this web service for multiple accounts, so we will be getting callback requests for many accounts.
Question:
What is the optimal way to process these multiple callback requests coming for multiple accounts.
Solutions that we thought of:
ExecutorService Fixed Thread Pool: This will process our callback requests in parallel, but the concern is that it does not maintain the sequence. So it will be difficult for us to determine that the last callback request for an acknowledgement ID (account) has come. Hence, we will not be able to mark the final status of that account as completed with certainty.
ExecutorService Single Thread Executor: Here, only one thread is in the pool, with an unbounded queue. If we use this then processing will be pretty slow, as only one thread will actually be processing.
Please suggest an optimal way to implement this requirement, both memory- and performance-wise.
Let's be clear about one thing: HTTP is a blocking, synchronous protocol. Request/response pairs aren't async. What you're doing is spawning async requests and returning to the caller to let them know the request is being processed (HTTP 200) or not (HTTP 500).
I'm not sure that I know optimal for this situation, but there are other considerations:
Use an ExecutorService thread pool that you can configure. Make sure you have a thread-name prefix that lets you distinguish these threads from others.
Add request tasks to a blocking deque and have a pool of consumer threads process them. You can tune the deque and the consumer thread pool sizes.
If processing is intensive, send request messages to a queue running on another server. Have a pool of queue listeners process the requests.
You cannot assume that the callbacks will return in a certain order. Don't depend on "last" being "true". You'll have to join all those threads together to know when they're finished.
It sounds like the web service should have a URL that lets users query for status.
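The "join" bookkeeping from the point above is the same in any language. A sketch (in JavaScript for brevity; in Java a ConcurrentHashMap plus atomic counters would play the map's role, and handle/markAccountComplete are hypothetical):

const inFlight = new Map(); // ackId -> { sawLast: bool, processing: count }

function onCallback(ackId, payload) {
  const state = inFlight.get(ackId) || { sawLast: false, processing: 0 };
  inFlight.set(ackId, state);
  state.processing++;
  handle(payload).then(() => {
    state.processing--;
    if (payload.completed) state.sawLast = true;
    // done only when the "last" marker has arrived AND nothing is still
    // running, so out-of-order callbacks can't cause a premature "completed"
    if (state.sawLast && state.processing === 0) {
      markAccountComplete(ackId);
      inFlight.delete(ackId);
    }
  });
}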

Response Order in Node.js?

I've gone through some introductory articles on Node.js and the Event Loop, and one thing is not clear: if there are multiple concurrent requests, are the responses always sequential, in the order the requests were made? Say 20 requests complete simultaneously: will the 20th response have to wait for the other 19 to be cleared (responded back to the client)?
Update: What I was wondering is whether this is similar to how multiple setTimeouts get queued?
node.js runs Javascript as single threaded. Thus, only one piece of Javascript is running at any given time.
But, almost all I/O (e.g. networking, file access, etc...) is asynchronous and non-blocking. So, if 20 requests are made of your server in a very short period of time, the first request to reach the server will start executing its request handler and the other requests will be queued. But, as soon as the first request hits an asynchronous operation (such as reading from the local file system), that request will be suspended while the non-blocking asynchronous I/O is taking place and the next request in line will start to run.
This second request will then run until it either finishes or until it also hits a piece of asynchronous I/O. When that second request is waiting on the async I/O, then another request will get to run. The system scheduler will determine if the next operation is the completion of the async I/O request from the first request or if it will start the third request that was waiting in the queue.
The various requests will continue this way until all are done. Multiple requests may be "in-flight" at the same time (meaning they've been started, but have not completed yet), but only one is ever actually executing code at any given moment.
This is sometimes referred to as cooperative tasking. There is no pre-emptive multi-tasking among the different requests where each automatically gets a time slice of the host CPU. But, any time a request hits an asynchronous I/O operation, then that tells the scheduler that other requests waiting to run can run.
This is all managed from an event queue in node.js. A piece of Javascript runs until it completes. If it makes an asynchronous I/O request and then completes, then another piece of Javascript that is also waiting to run can start to run. When it is done, the JS engine pulls the next item out of the event queue and runs it. That might be a new incoming request or it might be the completion of some asynchronous I/O operation on some other request.
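A tiny demo of that interleaving (the file name is made up): while one request waits on the file read, the server keeps handling the others, so responses go out in completion order, not arrival order.

const http = require('http');
const fs = require('fs');

http.createServer((req, res) => {
  // non-blocking read: this handler returns immediately and the
  // event loop is free to start on the next queued request
  fs.readFile('./big-file.txt', (err, data) => {
    res.end(err ? 'error' : data);
  });
}).listen(3000);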
The advantages of this type of system are:
It scales really well, particularly for I/O bound server operations, because you can have many requests "in-flight" at the same time with only a single Javascript thread. The cooperative tasking is very lightweight and fast.
Programming a system like this has far fewer "race conditions" to watch out for because no two pieces of Javascript are ever running at the actual same time. This means you can often share state between requests without ever having to use mutexes (like you would in a multi-thread environment). Since thread-safe bugs are often very difficult to avoid and to test for, it's a major advantage to eliminate these types of bugs.
The cooperative model is conceptually simple and easier to learn and to program safely.
The disadvantages of this type of system are:
It does not share the CPU among tasks that are CPU-bound. A node.js programmer with lots of heavy CPU-bound computations often has to use clustering or child processes to handle the heavy CPU computations, so as not to over-burden the main request-processing Javascript thread with that work and make it too unresponsive.
Clustering of processes is required to maximize the use of multiple processors and then any shared data must be shared across those processes. People often use an in-memory database like Redis to share data between processes.
You can't just willy-nilly fire up another Javascript thread to go off and do something.

How, in general, does Node.js handle 10,000 concurrent requests?

I understand that Node.js uses a single thread and an event loop to process requests, only processing one at a time (which is non-blocking). But still, how does that work for, let's say, 10,000 concurrent requests? The event loop will process all the requests? Would that not take too long?
I cannot understand (yet) how it can be faster than a multi-threaded web server. I understand that a multi-threaded web server will be more expensive in resources (memory, CPU), but would it not still be faster? I am probably wrong; please explain how this single thread is faster with lots of requests, and what it typically does (at a high level) when servicing lots of requests like 10,000.
And also, will that single thread scale well with that large an amount? Please bear in mind that I am just starting to learn Node.js.
If you have to ask this question then you're probably unfamiliar with what most web applications/services do. You're probably thinking that all software does this:
user do an action
│
v
application start processing action
└──> loop ...
└──> busy processing
end loop
└──> send result to user
However, this is not how web applications, or indeed any application with a database as the back-end, work. Web apps do this:
user do an action
│
v
application start processing action
└──> make database request
└──> do nothing until request completes
request complete
└──> send result to user
In this scenario, the software spends most of its running time using 0% CPU time, waiting for the database to return.
Multithreaded network app:
Multithreaded network apps handle the above workload like this:
request ──> spawn thread
└──> wait for database request
└──> answer request
request ──> spawn thread
└──> wait for database request
└──> answer request
request ──> spawn thread
└──> wait for database request
└──> answer request
So the threads spend most of their time using 0% CPU, waiting for the database to return data. While doing so, they have had to allocate the memory required for a thread, which includes a completely separate program stack for each thread, etc. Also, they would have to start a thread, which, while not as expensive as starting a full process, is still not exactly cheap.
Singlethreaded event loop
Since we spend most of our time using 0% CPU, why not run some code when we're not using CPU? That way, each request will still get the same amount of CPU time as multithreaded applications but we don't need to start a thread. So we do this:
request ──> make database request
request ──> make database request
request ──> make database request
database request complete ──> send response
database request complete ──> send response
database request complete ──> send response
In practice both approaches return data with roughly the same latency since it's the database response time that dominates the processing.
The main advantage here is that we don't need to spawn a new thread so we don't need to do lots and lots of malloc which would slow us down.
Magic, invisible threading
The seemingly mysterious thing is how both the approaches above manage to run workload in "parallel"? The answer is that the database is threaded. So our single-threaded app is actually leveraging the multi-threaded behaviour of another process: the database.
Where singlethreaded approach fails
A singlethreaded app fails big if you need to do lots of CPU calculations before returning the data. Now, I don't mean a for loop processing the database result. That's still mostly O(n). What I mean is things like doing Fourier transform (mp3 encoding for example), ray tracing (3D rendering) etc.
Another pitfall of singlethreaded apps is that they will only utilise a single CPU core. So if you have a quad-core server (not uncommon nowadays) you're not using the other 3 cores.
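You can see this failure mode with a few lines (a deliberately bad handler; the URL is made up):

const http = require('http');

http.createServer((req, res) => {
  if (req.url === '/block') {
    const end = Date.now() + 5000;
    while (Date.now() < end) {} // 5 seconds of pure computation
  }
  res.end('done');
}).listen(3000);
// while /block is computing, even trivial requests sit and wait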
Where multithreaded approach fails
A multithreaded app fails big if you need to allocate lots of RAM per thread. First, the RAM usage itself means you can't handle as many requests as a singlethreaded app. Worse, malloc is slow. Allocating lots and lots of objects (which is common for modern web frameworks) means we can potentially end up being slower than singlethreaded apps. This is where node.js usually wins.
One use-case that ends up making multithreaded even worse is when you need to run another scripting language in your thread. First you usually need to malloc the entire runtime for that language, then you need to malloc the variables used by your script.
So if you're writing network apps in C or Go or Java then the overhead of threading will usually not be too bad. If you're writing a C web server to serve PHP or Ruby then it's very easy to write a faster server in Javascript or Ruby or Python.
Hybrid approach
Some web servers use a hybrid approach. Nginx and Apache2 for example implement their network processing code as a thread pool of event loops. Each thread runs an event loop simultaneously processing requests single-threaded but requests are load-balanced among multiple threads.
Some single-threaded architectures also use a hybrid approach. Instead of launching multiple threads from a single process you can launch multiple applications - for example, 4 node.js servers on a quad-core machine. Then you use a load balancer to spread the workload amongst the processes. The cluster module in node.js does exactly this.
In effect the two approaches are technically identical mirror-images of each other.
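A minimal sketch of that second hybrid using node.js's cluster module:

const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isMaster) {
  // one worker process per core; connections are balanced across them
  os.cpus().forEach(() => cluster.fork());
} else {
  http.createServer((req, res) => res.end('ok')).listen(3000);
}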
What you seem to be thinking is that most of the processing is handled in the node event loop. Node actually farms off the I/O work to threads. I/O operations typically take orders of magnitude longer than CPU operations so why have the CPU wait for that? Besides, the OS can handle I/O tasks very well already. In fact, because Node does not wait around it achieves much higher CPU utilisation.
By way of analogy, think of NodeJS as a waiter taking the customer orders while the I/O chefs prepare them in the kitchen. Other systems have multiple chefs, who take a customers order, prepare the meal, clear the table and only then attend to the next customer.
Single Threaded Event Loop Model processing steps:
- Clients send requests to the web server.
- The Node JS web server internally maintains a limited thread pool to provide services to the client requests.
- The Node JS web server receives those requests and places them into a queue, known as the "Event Queue".
- The Node JS web server internally has a component known as the "Event Loop". It got this name because it uses an indefinite loop to receive requests and process them.
- The Event Loop uses a single thread only. It is the heart of the Node JS platform's processing model.
- The Event Loop checks whether any client request is placed in the Event Queue. If not, it waits for incoming requests indefinitely.
- If yes, it picks up one client request from the Event Queue and starts processing it.
- If that client request does not require any blocking IO operations, it processes everything, prepares the response, and sends it back to the client.
- If that client request requires blocking IO operations, like interacting with a database, file system, or external services, it follows a different approach: it checks thread availability in the internal thread pool, picks up one thread, and assigns the client request to that thread. That thread is responsible for taking the request, processing it, performing the blocking IO operations, preparing the response, and sending it back to the Event Loop.
This is very nicely explained by Rambabu Posa; for more explanation, go through this link.
I understand that Node.js uses a single-thread and an event loop to process requests only processing one at a time (which is non-blocking).
I could be misunderstanding what you've said here, but "one at a time" sounds like you may not be fully understanding the event-based architecture.
In a "conventional" (non event-driven) application architecture, the process spends a lot of time sitting around waiting for something to happen. In an event-based architecture such as Node.js the process doesn't just wait, it can get on with other work.
For example: you get a connection from a client, you accept it, you read the request headers (in the case of http), then you start to act on the request. You might read the request body, you will generally end up sending some data back to the client (this is a deliberate simplification of the procedure, just to demonstrate the point).
At each of these stages, most of the time is spent waiting for some data to arrive from the other end - the actual time spent processing in the main JS thread is usually fairly minimal.
When the state of an I/O object (such as a network connection) changes such that it needs processing (e.g. data is received on a socket, a socket becomes writable, etc) the main Node.js JS thread is woken with a list of items needing to be processed.
It finds the relevant data structure and emits some event on that structure which causes callbacks to be run, which process the incoming data, or write more data to a socket, etc. Once all of the I/O objects in need of processing have been processed, the main Node.js JS thread will wait again until it's told that more data is available (or some other operation has completed or timed out).
The next time that it is woken, it could well be due to a different I/O object needing to be processed - for example a different network connection. Each time, the relevant callbacks are run and then it goes back to sleep waiting for something else to happen.
The important point is that the processing of different requests is interleaved, it doesn't process one request from start to end and then move onto the next.
To my mind, the main advantage of this is that a slow request (e.g. you're trying to send 1MB of response data to a mobile phone device over a 2G data connection, or you're doing a really slow database query) won't block faster ones.
In a conventional multi-threaded web server, you will typically have a thread for each request being handled, and it will process ONLY that request until it's finished. What happens if you have a lot of slow requests? You end up with a lot of your threads hanging around processing these requests, and other requests (which might be very simple requests that could be handled very quickly) get queued behind them.
There are plenty of other event-based systems apart from Node.js, and they tend to have similar advantages and disadvantages compared with the conventional model.
I wouldn't claim that event-based systems are faster in every situation or with every workload - they tend to work well for I/O-bound workloads, not so well for CPU-bound ones.
Adding to slebetman's answer:
When you say Node.JS can handle 10,000 concurrent requests, these are essentially non-blocking requests, i.e. requests mostly pertaining to database queries.
Internally, the event loop of Node.JS is handling a thread pool, where each thread handles the blocking part of a request, and the event loop continues to listen for more requests after delegating work to one of the threads of the thread pool. When one of the threads completes the work, it sends a signal to the event loop that it has finished, aka a callback. The event loop then processes this callback and sends the response back.
As you are new to NodeJS, do read more about nextTick to understand how the event loop works internally.
Read blogs on http://javascriptissexy.com, they were really helpful for me when I started with JavaScript/NodeJS.
The blocking part of a multithreaded blocking system makes it less efficient: the thread which is blocked cannot be used for anything else while it is waiting for a response.
A non-blocking single-threaded system, by contrast, makes the best use of its single thread.
[Diagram: a waiter takes an order, then waits at the kitchen door until the kitchen finishes it before serving and taking the next order.]
Here, waiting at the kitchen door, or waiting while the customer is selecting food items, "blocks" the full capacity of the waiter. In the sense of compute systems, this could be waiting for IO, or a DB response, or anything that blocks the whole thread, even though the thread is capable of other work while waiting.
Let's see how non-blocking works:
In a non-blocking system, the waiter only takes orders and serves orders; he does not wait anywhere. He shares his mobile number so the customers can give him a call back when they have finalized their order. Similarly, he shares his number with the kitchen for a callback when an order is ready to serve.
This is how the event loop works in NodeJS, and it performs better than a blocking multithreaded system.
Adding to slebetman's answer for more clarity on what happens while executing the code.
The internal thread pool in nodeJs has just 4 threads by default. And it's not that the whole request is attached to a new thread from the thread pool; the execution of a request happens just like any normal request (without any blocking task), except that whenever a request involves a long-running or heavy operation, like a DB call, a file operation, a DNS lookup, or similar blocking work, that task is queued to the internal thread pool, which is provided by libuv (network sockets themselves are handled by the OS's non-blocking mechanisms, not the pool). And since nodeJs provides 4 threads in the internal thread pool by default, every 5th or subsequent concurrent blocking operation waits until a thread is free. Once these operations are over, the callback is pushed to the callback queue, where it is picked up by the event loop, which sends back the response.
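The 4-thread default is easy to observe. For instance, pbkdf2 is one of the operations libuv sends to the pool: fire 8 calls and the completions arrive in two waves of roughly 4 (a small demo, not production code):

const crypto = require('crypto');
const start = Date.now();

for (let i = 1; i <= 8; i++) {
  crypto.pbkdf2('secret', 'salt', 100000, 64, 'sha512', () => {
    console.log('call ' + i + ' done after ' + (Date.now() - start) + 'ms');
  });
}
// the pool size can be raised, e.g. UV_THREADPOOL_SIZE=8 node demo.js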
Now here comes another piece of information: it's not one single callback queue; there are many queues.
NextTick queue
Micro task queue
Timers Queue
IO callback queue (Requests, File ops, db ops)
IO Poll queue
Check Phase queue or SetImmediate
close handlers queue
Whenever a request comes in, the code gets executed in this order of queued callbacks.
It is not the case that when there is a blocking request it is attached to a new thread. There are only 4 threads by default, so there is another round of queueing happening there.
Whenever a blocking operation like a file read occurs in the code, a function is called which utilises a thread from the thread pool; once the operation is done, the callback is passed to the respective queue and then executed in order.
Everything gets queued based on the type of callback and processed in the order mentioned above.
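A quick way to see part of that ordering from the node REPL or a script:

process.nextTick(() => console.log('nextTick queue'));
Promise.resolve().then(() => console.log('microtask queue'));
setTimeout(() => console.log('timers queue'), 0);
setImmediate(() => console.log('check/setImmediate queue'));
// nextTick runs first, then microtasks; the timer vs. setImmediate
// order here can vary (it is only fixed inside an I/O callback)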
Here is a good explanation from this medium article:
Given a NodeJS application, since Node is single threaded, say if processing involves a Promise.all that takes 8 seconds, does this mean that the client request that comes after this request would need to wait for eight seconds?
No. NodeJS event loop is single threaded. The entire server architecture for NodeJS is not single threaded.
Before getting into the Node server architecture, take a look at the typical multithreaded request/response model: the web server has multiple threads, and when concurrent requests get to the web server, it picks threadOne from the threadPool, and threadOne processes requestOne and responds to clientOne. When the second request comes in, the web server picks up a second thread from the threadPool, which picks up requestTwo, processes it, and responds to clientTwo. threadOne is responsible for all kinds of operations that requestOne demanded, including any blocking IO operations.
The fact that the thread needs to wait for blocking IO operations is what makes it inefficient. With this kind of a model, the web server is only able to serve as many requests as there are threads in the thread pool.
NodeJS Web Server maintains a limited Thread Pool to provide services to client requests. Multiple clients make multiple requests to the NodeJS server. NodeJS receives these requests and places them into the EventQueue.
NodeJS server has an internal component referred to as the EventLoop which is an infinite loop that receives requests and processes them. This EventLoop is single threaded. In other words, EventLoop is the listener for the EventQueue.
So, we have an event queue where the requests are being placed and we have an event loop listening to these requests in the event queue. What happens next?
The listener (the event loop) processes the request, and if it is able to process the request without needing any blocking IO operations, then the event loop processes the request itself and sends the response back to the client.
If the current request uses blocking IO operations, the event loop sees whether there are threads available in the thread pool, picks up one thread from the thread pool, and assigns the particular request to the picked thread. That thread does the blocking IO operations and sends the response back to the event loop; once the response gets to the event loop, the event loop sends the response back to the client.
How is NodeJS better than traditional multithreaded request response model?
With the traditional multithreaded request/response model, every client gets a different thread, whereas with NodeJS the simpler requests are all handled directly by the EventLoop. This is an optimization of thread pool resources, and there is no overhead of creating a thread for every client request.
In node.js, requests should be IO-bound, not CPU-bound. It means that each request should not force node.js to do a lot of computation. If there is a lot of computation involved in solving a request then node.js is not a good choice. IO-bound work requires little computation: most of the time, requests are spent either making a call to a DB or to a service.
Node.js has a single-threaded event loop, but it is just a chef. Behind the scenes, most of the work is done by the operating system, and Libuv handles the communication with the OS. From the Libuv docs:
In event-driven programming, an application expresses interest in certain events and respond to them when they occur. The responsibility of gathering events from the operating system or monitoring other sources of events is handled by libuv, and the user can register callbacks to be invoked when an event occurs.
The incoming requests are handled by the operating system. This is pretty much correct for almost all servers based on the request/response model. Incoming network calls are queued in the OS's non-blocking IO queue. The event loop constantly polls the OS IO queue; that is how it gets to know about an incoming client request. "Polling" means checking the status of some resource at a regular interval. If there are any incoming requests, the event loop will take a request and execute it synchronously. While executing, if there is any async call (e.g. setTimeout), it will be put into the callback queue. After the event loop finishes executing the sync calls, it can poll the callbacks; if it finds a callback that needs to be executed, it will execute it. Then it will poll for any incoming request. If you check the node.js docs, there is this image:
[Image from the docs' phase overview: the event loop's phases run in order - timers, pending callbacks, idle/prepare, poll, check, close callbacks.]
poll: retrieve new I/O events; execute I/O related callbacks (almost all with the exception of close callbacks, the ones scheduled by timers, and setImmediate()); node will block here when appropriate.
So the event loop is constantly polling from different queues. If any request needs an external call or disk access, it is passed to the OS, and the OS also has 2 different queues for those. As soon as the event loop detects that something has to be done async, it puts it in a queue. Once it is put in a queue, the event loop proceeds to the next task.
One thing to mention here: the event loop runs continuously. Only the OS scheduler can move this thread off the CPU; the event loop itself will not yield it.
From the docs:
The secret to the scalability of Node.js is that it uses a small number of threads to handle many clients. If Node.js can make do with fewer threads, then it can spend more of your system's time and memory working on clients rather than on paying space and time overheads for threads (memory, context-switching). But because Node.js has only a few threads, you must structure your application to use them wisely.
Here's a good rule of thumb for keeping your Node.js server speedy: Node.js is fast when the work associated with each client at any given time is "small".
Note that small tasks mean IO-bound tasks, not CPU-bound ones. A single event loop will handle the client load only if the work for each request is mostly IO work.
A context switch basically means the CPU is out of resources, so it needs to stop the execution of one process to allow another process to execute. The OS first has to evict process1: it takes the process off the CPU and saves its state in main memory. Next, the OS restores process2 by loading its process control block from memory and putting it on the CPU for execution. Then process2 starts its execution. Between process1 stopping and process2 starting, some time is lost. A large number of threads can cause a heavily loaded system to spend precious cycles on thread scheduling and context switching, which adds latency and imposes limits on scalability and throughput.
