Why "worker_threads" when we have default worker pool? - node.js

The cluster method is clear to me, since it deploys whole separate processes. And I guess the people behind Node.js made the "worker_threads" module for some good reason... but I still need to clear this point up for my own understanding:
In a normal single-threaded process, the event loop thread has the aid of the default worker pool to offload its heavy I/O tasks, so the main thread is not blocked.
At the same time, user-defined "worker threads" are used for the same reason, each with its own event loop and Node.js instance.
What's the point of spawning those event loops and Node.js instances when they are not the bottleneck, given that libuv is already responsible for spawning the pool workers?
Does this mean that the default worker pool may not be enough? Is it just a matter of quantity, or a different concept?

There are two types of operations (calls) in Node.js: blocking and non-blocking.
Non-blocking calls
Node.js uses libuv for non-blocking I/O. Network, file, and DNS I/O operations are run asynchronously by libuv. Node.js uses the following scheme:
Asynchronous system APIs are used by Node.js whenever possible, but where they do not exist, Libuv's thread pool is used to create asynchronous node APIs based on synchronous system APIs. Node.js APIs that use the thread pool are:
all fs APIs, other than the file watcher APIs and those that are explicitly synchronous
asynchronous crypto APIs such as crypto.pbkdf2(), crypto.scrypt(), crypto.randomBytes(), crypto.randomFill(), crypto.generateKeyPair()
dns.lookup()
all zlib APIs, other than those that are explicitly synchronous
So we don't have direct access to the libuv thread pool; we can only define our own uses of it through C++ add-ons.
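For illustration, here is a minimal sketch (reading this script's own file, purely as a convenient example) contrasting a call that goes through libuv's thread pool with its synchronous variant that runs on the main thread:

const fs = require('fs');

// Asynchronous read: the actual I/O happens in libuv's thread pool,
// so the event loop stays free in the meantime.
fs.readFile(__filename, (err, data) => {
  if (err) throw err;
  console.log('async read finished:', data.length, 'bytes');
});
console.log('event loop is free while the read happens');

// Synchronous variant: runs on the main thread and blocks the event loop
// until the whole file has been read.
const sync = fs.readFileSync(__filename);
console.log('sync read blocked the loop for', sync.length, 'bytes');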
Blocking calls
Node.js executes blocking code on the main thread. fs.readFileSync(), compression algorithms, encrypting data, image resizing, and calculating primes over a large range are some examples of blocking operations. The golden rule of Node.js is: never block the event loop (main thread). We can execute these operations asynchronously by creating a child process with the cluster module or the child_process module. But creating a child process is heavy in terms of OS resources, and that's why worker threads were born.
Using a worker thread you can execute blocking JavaScript code off the main thread, hence unblocking it, and you can communicate with the parent thread (main thread) via message passing. Worker threads are still lightweight compared to a child process.
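As a minimal sketch (the file name prime-worker.js and the countPrimes function are illustrative assumptions, not from the question), offloading a CPU-heavy loop to a worker and getting the result back via message passing could look like this:

// main.js — spawns the worker and keeps the event loop free
const { Worker } = require('worker_threads');

const worker = new Worker('./prime-worker.js', { workerData: { limit: 10000000 } });
worker.on('message', (count) => console.log('primes found:', count));
worker.on('error', (err) => console.error('worker failed:', err));

// prime-worker.js — runs the blocking computation off the main thread
const { parentPort, workerData } = require('worker_threads');

function countPrimes(limit) {
  let count = 0;
  for (let n = 2; n <= limit; n++) {
    let isPrime = true;
    for (let d = 2; d * d <= n; d++) {
      if (n % d === 0) { isPrime = false; break; }
    }
    if (isPrime) count++;
  }
  return count;
}

parentPort.postMessage(countPrimes(workerData.limit));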
Read more here:
https://nodesource.com/blog/worker-threads-nodejs
https://blog.insiderattack.net/deep-dive-into-worker-threads-in-node-js-e75e10546b11

Related

Can nodejs worker threads be used for executing long running file I/O based javascript code?

I can see that Node.js is bringing in multi-threading support via its worker_threads module. My current assumption (I have not yet explored it personally) is that I can offload a long-running / CPU-intensive operation to these worker threads.
I want to understand the behaviour if this long-running piece of code has some intermittent event callbacks or a chain of promises. Do these callbacks still execute on the worker threads, or do they get passed back to the main thread?
If these promises come back to the main thread, the advantage of using a worker thread may be lost.
Can someone clarify?
Update => Some context of the question
I have an HTTP request that initiates some background processing and returns a 202 status. After receiving such a request, I am starting the background processing via
setTimeout(function() { // performs long running file read operations.. })
and immediately return a 202 to the caller.
However, I have observed that while this background operation is running, other HTTP requests are either not being processed at all, or are very sluggish at best.
My hypothesis is that this continuous I/O processing of a million+ lines fills up the event loop with so many callbacks / promises that the main thread is unable to process other pending I/O tasks, such as accepting new requests.
I have explored the nodejs cluster option and this works well, as the long task is delegated to one of the child processes, and other instances of cluster are available to take up additional requests.
But I was thinking that worker threads might solve the same problem, without the overhead of cloning the process.
I assume each worker thread would have its own event loop.
So if you emit an event in a worker thread, only that thread would receive it and trigger the callback. The same for promises, if you create a promise within a worker, it will only be resolved by that worker.
This is supported by their statement in the documentation regarding Class: Worker: Most Node.js APIs are available inside of it (with some exceptions that are not related to event processing).
However they mention this earlier in the docs:
Workers are useful for performing CPU-intensive JavaScript operations; do not use them for I/O, since Node.js’s built-in mechanisms for performing operations asynchronously already treat it more efficiently than Worker threads can.
I think some small-scale async code in worker threads would be fine, but heavier callback/promise traffic would hurt performance. Some benchmarks could shed light on this.
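A small sketch of that behaviour (nothing beyond the worker_threads module itself is assumed): the same file runs both as the main thread and as a worker, and a promise created inside the worker resolves on the worker's own event loop, identified by its threadId:

const { Worker, isMainThread, threadId } = require('worker_threads');

if (isMainThread) {
  console.log(`main thread is ${threadId}`); // threadId 0
  new Worker(__filename);                    // re-run this file as a worker
} else {
  // This promise is created and resolved entirely on the worker's event loop.
  Promise.resolve().then(() => {
    console.log(`promise resolved in thread ${threadId}`); // threadId 1 (or higher)
  });
}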

What role does the V8 engine play in Node.js?

Over the past few days I've been researching how the Node.js event-based style can handle many more concurrent requests than the classic multithreading approach. In the end it comes down to a smaller memory footprint and fewer context switches, because Node.js only uses a couple of threads (the single V8 thread and a bunch of C++ worker threads, plus the main thread of libuv).
But how can it handle a huge number of requests with so few threads, when in the end some thread must be blocked waiting on, for example, a database read operation?
I think the idea is: instead of having both the client thread and the database thread blocked, only the database thread is blocked, and it alerts the client thread when it finishes.
This is how I understand Node.js works.
I've been wondering what gives Node.js the capability to handle HTTP requests.
Based on what I have read so far, I understand that libuv is what does that work:
Handles represent long-lived objects capable of performing certain
operations while active. Some examples: a prepare handle gets its
callback called once every loop iteration when active, and a TCP
server handle get its connection callback called every time there is a
new connection.
So the thread that waits for incoming HTTP requests is the main thread of libuv, which executes the libuv event loop.
So when we write
const http = require('http');
const hostname = '127.0.0.1';
const port = 1337;

http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello World\n');
}).listen(port, hostname, () => {
  console.log(`Server running at http://${hostname}:${port}/`);
});
... am I registering with libuv a callback that will be executed by the V8 engine when a request comes in?
The order of the events would then be:
A TCP packet arrives
The OS creates an event and sends it to the event loop
The event loop handles the event and creates a V8 event
If I execute blocking code inside the anonymous function that handles the request, I will be blocking the V8 thread.
In order to avoid this I need to execute non-blocking code that runs in another thread. I suppose this "another thread" is the main thread of libuv, where
network I/O is always performed in a single thread, each loop’s thread
This thread will not block because it uses OS syscalls that are asynchronous:
epoll on Linux, kqueue on OSX and other BSDs, event ports on SunOS
and IOCP on Windows
I also suppose that http.request uses libuv to achieve this.
Similarly, if I need to do some file I/O without blocking the V8 thread,
I will use the FileSystem (fs) module of Node. This time the libuv main thread can't handle it in a non-blocking way, because the OS doesn't offer such a feature.
Unlike network I/O, there are no platform-specific file I/O primitives
libuv could rely on, so the current approach is to run blocking file
I/O operations in a thread pool.
In this case a classic thread pool is needed in order to not block the libuv event-loop.
Now, if I need to query a database, all the responsibility for not blocking either the V8 thread or the libuv event loop is in the hands of the driver developer.
If the driver doesn't use libuv, it will block the V8 engine.
If it uses libuv but the underlying database doesn't have async capabilities, then it will block a worker thread.
Finally, if the database offers async capabilities, it will only block the database thread. (In that case I could avoid libuv altogether and call the driver directly from the V8 thread.)
If these conclusions describe correctly, although in a simplistic way, how libuv and V8 work together in Node.js, I can't see the benefit of using V8, because we could do all the work in libuv directly (unless the objective is to give the developer a language that makes writing event-based code simpler).
From what I know the difference is mainly asynchronous I/O. In traditional process-per-request or thread-per-request servers, I/O, most notably network I/O, is synchronous. Node.js uses fewer threads than Apache or similar servers, and it can handle the traffic mostly because it uses asynchronous network I/O.
Node.js needs V8 to actually interpret the JS code and turn it into machine code. Libuv is needed to do real I/O. I do not know much more than that :)
There is an excellent post about the V8 engine in Node.js: How JavaScript works: inside the V8 engine + 5 tips on how to write optimized code. It explains many deep details of the engine and gives some great recommendations for using it.
Simply put, what the V8 engine (and any other JavaScript engine) does is execute JavaScript code. However, the V8 engine achieves higher execution performance than most others.
V8 translates JavaScript code into more efficient machine code instead
of using an interpreter. It compiles JavaScript code into machine code
at execution by implementing a JIT (Just-In-Time) compiler ...
I/O is non-blocking and asynchronous via libuv, which under the hood uses OS primitives such as epoll (or the platform equivalent) to make I/O non-blocking. The Node.js event loop gets an event queued back whenever something happens on a file descriptor (a TCP socket, for example).
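A minimal sketch of that flow (port 1337 is an arbitrary choice): libuv watches the listening socket and each connection's file descriptor, and every readiness event it observes runs its callback on the single event-loop thread:

const net = require('net');

// Each new connection and each 'data' event is surfaced by libuv
// (epoll/kqueue/IOCP under the hood) and its callback runs on the
// single event-loop thread; no thread pool is involved for sockets.
net.createServer((socket) => {
  socket.on('data', (chunk) => socket.write(chunk)); // simple echo
}).listen(1337, () => console.log('listening on 1337'));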

What kind of operations are handled by nodejs worker threads?

I need some clarification on what exactly are the nodejs worker threads doing.
I found contradicting information on this. Some people say worker threads handle all I/O, others say they handle only blocking POSIX requests (for which there is no async version).
For example, assume that I am not blocking the main loop myself with some unreasonable processing; I am just invoking functions from available modules and providing the callbacks. I can see that if this requires a blocking or computationally expensive operation, it is handed off to a worker thread. But for some async I/O, is it initiated from the main libuv loop? Or is it passed to a worker thread and initiated from there?
Also, would a Node.js worker thread ever initiate a blocking (synchronous) I/O operation when the OS supports an async way to do the same thing? Is it documented anywhere what kind of operations may end up blocking a worker thread for a longer time?
I'm asking this because there is a fixed-size worker pool and I want to avoid making mistakes with it. Thanks.
Network I/O on all platforms is done on the main thread. File I/O is a different story: on Windows it is done truly asynchronously and without blocking, but on all other platforms synchronous file I/O operations are performed in a thread pool so that they appear asynchronous and non-blocking.
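A rough sketch of how the fixed-size pool shows up in practice (the PBKDF2 parameters are arbitrary; the batching assumes the default UV_THREADPOOL_SIZE of 4, which can be raised via that environment variable):

const crypto = require('crypto');
const start = Date.now();

// Queue more thread-pool tasks than there are pool threads (4 by default).
for (let i = 1; i <= 8; i++) {
  crypto.pbkdf2('secret', 'salt', 500000, 64, 'sha512', (err) => {
    if (err) throw err;
    console.log(`hash ${i} done after ${Date.now() - start} ms`);
  });
}
// Completions typically arrive in two batches of four, because only four
// hashes can occupy the libuv thread pool at a time.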

How does node.js work?

I don't understand several things about Node.js. Every information source says that Node.js is more scalable than standard threaded web servers due to the lack of thread locking and context switching, but I wonder: if Node.js doesn't use threads, how does it handle concurrent requests in parallel? What does the evented I/O model mean?
Your help is much appreciated.
Thanks
Node is completely event-driven. Basically the server consists of one thread processing one event after another.
A new request coming in is one kind of event. The server starts processing it and when there is a blocking IO operation, it does not wait until it completes and instead registers a callback function. The server then immediately starts to process another event (maybe another request). When the IO operation is finished, that is another kind of event, and the server will process it (i.e. continue working on the request) by executing the callback as soon as it has time.
So the server never needs to create additional threads or switch between threads, which means it has very little overhead. If you want to make full use of multiple hardware cores, you just start multiple instances of node.js
Update
At the lowest level (C++ code, not Javascript), there actually are multiple threads in node.js: there is a pool of IO workers whose job it is to receive the IO interrupts and put the corresponding events into the queue to be processed by the main thread. This prevents the main thread from being interrupted.
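A tiny sketch of the callback-registration flow described above (the file path is only illustrative): the server registers a callback for the I/O and immediately goes back to processing other events:

const fs = require('fs');

console.log('start handling a request');

// Register a callback instead of waiting for the read to finish.
fs.readFile('/path/to/some/large-file.txt', 'utf8', (err, data) => {
  // This runs later, as its own event, once the I/O has completed.
  console.log('I/O finished, continue working on that request');
});

console.log('free to process the next event right away');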
Although the question was answered a long time ago, I'm adding my thoughts on the same.
Node.js is a single-threaded JavaScript runtime environment. Basically, the concern of its creator Ryan Dahl was that parallel processing using multiple threads is not the right way, or is too complicated.
if Node.js doesn't use threads how does it handle concurrent requests in parallel
Ans: It is wrong to say that it doesn't use threads. Node.js uses threads, but in a smart way. It uses a single thread to serve all the HTTP requests and multiple threads in a thread pool (in libuv) for handling any blocking operation.
Libuv: A library to handle asynchronous I/O.
What does event I/O model means?
Ans: The right term is non-blocking I/O. It almost never blocks, as the official Node.js site says. When any request reaches the Node server, it is never queued. Node takes the request and starts executing it; if it is a blocking operation, it is handed off to the worker-thread area and a callback is registered for it. As soon as that execution finishes, the callback is triggered, goes to the event queue, and is processed by the event loop; after that the response is created and sent to the respective client.
Node.js is a JavaScript runtime environment. Both the browser and Node.js run on the V8 JavaScript engine. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient. Node.js applications use a single-threaded event loop architecture to handle concurrent clients. Its main event loop is single-threaded, but most of the I/O works on separate threads, because the I/O APIs in Node.js are asynchronous/non-blocking by design, in order to accommodate the main event loop.
Consider a scenario where we ask a backend database for the details of user1 and user2 and then print them on the screen/console. The response to this request takes time, but both of the user data requests can be carried out independently and at the same time.
When 100 people connect at once, rather than having different threads, Node will loop over those connections and fire off any events your code should know about. If a connection is new, it will tell you. If a connection has sent you data, it will tell you. If the connection isn't doing anything, it will skip over it rather than taking up precious CPU time on it. Everything in Node is based on responding to these events. So the CPU stays focused on that one process and doesn't have a bunch of threads competing for attention. There is no buffering in a Node.js application; it simply outputs the data in chunks.
Though it has been answered, I would like to share my understanding in simple terms.
Node.js uses a library called libuv. Libuv is written in the C language and uses the concept of threads. These threads are called workers, and these workers take care of the multiple requests from clients.
Parallel processing in Node.js is achieved with the help of two concepts:
Asynchronous execution
Non-blocking I/O

Is NodeJS really Single-Threaded?

Node.js solves "One Thread per Connection Problem" by putting the event-based model at its core, using an event loop instead of threads.
All the expensive I/O operations are always executed asynchronously with a callback that gets executed when the initiated operation completes.
Observing whether any operation has occurred is handled by multiplexing mechanisms like epoll().
My question is now:
Why doesn't NodeJS block while using the blocking system calls
select/epoll/kqueue?
Or isn't NodeJS single-threaded at all, so that a second thread is
necessary to observe all the I/O operations with select/epoll/kqueue?
NodeJS is evented (2nd line from the website), not single-threaded. It internally handles the threading needed for select/epoll/kqueue without the user having to manage it explicitly, but that doesn't mean there is no thread usage within it.
No.
When I/O operations are initiated they are delegated to libuv, which manages the request using its own (multi-threaded, asynchronous) environment. libuv announces the completion of I/O operations, allowing any callbacks waiting on this event to be re-introduced to the main V8 thread for execution.
V8 -> Delegate I/O (libuv) -> Thread pool -> Multi threaded async
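A quick sketch to check where callbacks land (reading this script's own file just as a convenient example): the read itself is done by libuv, but the callback is re-introduced to the main V8 thread:

const fs = require('fs');
const { isMainThread, threadId } = require('worker_threads');

// The file read runs in libuv's thread pool, but the callback below
// is queued back onto the main thread's event loop.
fs.readFile(__filename, () => {
  console.log(`callback on main thread? ${isMainThread}, threadId ${threadId}`);
});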
JavaScript is single-threaded, and so is the event model, but the Node stack is not single-threaded.
Node relies on the V8 engine to run JavaScript and on libuv for its concurrency.
No. Node.js as a whole is not single-threaded, but the Node event loop (which Node.js heavily relies on) is single-threaded.
Parts of the Node framework / standard library are not single-threaded.
