Does Node.js require a multi-core VPS?

I want to develop a website with Nuxt.js or Next.js on a VPS with a single-core 2.4 GHz CPU and 1 GB of RAM.
Can my website run fast enough as a start?
Roughly how many requests per second might it be able to handle?

Whether a Node.js application benefits from multiple cores is application-dependent.
Generally, if the child_process or cluster modules are not involved, there is no need for multiple cores: Node.js will only use one, because the request handler always runs on the same event loop, which runs on a single thread.
How to achieve concurrency and high throughput:
Because JavaScript execution in Node.js is single-threaded, a good rule of thumb for keeping your Node server speedy is to avoid blocking the event loop. You can read about this in the official documentation in my references below.
Simple Illustration:
Consider a case where each request to a web server takes 50ms to complete and 45ms of that 50ms is database I/O that can be done asynchronously.
Choosing non-blocking asynchronous operations frees up that 45ms per request to handle other requests.
This is a significant difference in your application capacity and processing speed just by choosing to use non-blocking methods instead of blocking methods.
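As a rough sketch of that illustration (Express and the ~45ms timing are just stand-ins; `db.query` here is a fake driver written for the example, not a real library call):

```js
const express = require('express');
const app = express();

// Fake database driver for illustration: query() resolves after ~45ms,
// simulating the asynchronous database I/O from the example above.
const db = {
  query: (sql) => new Promise((resolve) => setTimeout(() => resolve([{ sql }]), 45)),
};

// Non-blocking handler: the ~45ms of "database I/O" happens off the
// event loop, which stays free to start handling other requests.
app.get('/users', async (req, res) => {
  const rows = await db.query('SELECT * FROM users'); // event loop not blocked
  res.json(rows);                                     // remaining ~5ms of CPU work
});

app.listen(3000);
```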
Reference:
https://nodejs.org/en/docs/guides/dont-block-the-event-loop/
https://nodejs.org/en/docs/guides/blocking-vs-non-blocking/
I hope this helps.

Related

How can node use more than 100% CPU if it is single threaded?

I am running an Express.js server. When I send some load, the CPU usage spikes to over 140%.
I understand that since the system I am running the server on has 4 cores, usage could go up to 400% as well.
My question is:
How can a Node.js application consume more than 100% CPU despite being single-threaded?
To improve performance, should I run the server in cluster mode? Currently a lot of requests are in the http_request_waiting state.
Although Node.js uses a single-threaded model to serve requests efficiently, its underlying I/O model is multi-threaded. Two libuv components are at work here: the event loop and the thread pool. The thread pool is handed blocking work such as file reads, DNS lookups (dns.lookup), and some crypto and zlib operations, and those threads can run on other cores.
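A minimal way to observe this yourself, assuming a multi-core machine: crypto.pbkdf2 is one of the built-in calls that runs on libuv's thread pool (default size 4), so several concurrent calls will push the process well past 100% in top:

```js
// Four pbkdf2 jobs run concurrently on libuv's thread pool,
// so the process as a whole can exceed 100% CPU while the
// JavaScript thread itself stays mostly idle.
const crypto = require('crypto');

const start = Date.now();
for (let i = 0; i < 4; i++) {
  crypto.pbkdf2('secret', 'salt', 1000000, 64, 'sha512', () => {
    console.log(`hash ${i} done after ${Date.now() - start}ms`);
  });
}
```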

What is the meaning of I/O intensive in Node.js

I was learning Node.js and found out that Node.js is best used for I/O-intensive tasks, which confused me a bit. So, after some research I found this statement: "An application that reads and/or writes a large amount of data". Does it mean that Node.js is best used with data, that is, reading big data, taking the necessary parts from it, and sending them back to the client?
A nodejs application can be architected just fine to include non-I/O things, and it is not just suited for big data applications (in fact, big data has nothing to do with it at all).
A default, simple implementation of Node.js performs best when your application is not CPU-intensive and instead spends most of its time doing I/O (input/output) tasks such as reading/writing to a database, reading/writing files, or reading/sending network data. It's not about big data; it's about what the server spends most of its time doing.
Surprisingly enough (to some) since a web server's primary job is responding to http requests which are usually requests for data, most web servers spend most of their time fetching things, reading and writing things and sending things which are all I/O tasks. In the node.js design, all these I/O tasks happen asynchronously in a non-blocking fashion and they use events to signal when those operations complete. This is where the phrase "event-driven design" comes from when describing node.js. It so happens that this makes node.js very efficient at handling things that involve primarily I/O. This is what a simple implementation of node.js does best. And, it generally does it better than a purely threaded server design that devotes an OS thread to every currently in-flight I/O operation (the original design for many server frameworks).
If you do have CPU intensive things (major calculations, image processing, heavy crypto operations, etc...) and you do them very often or they take very long, then you will be best served if you put those tasks in a Worker Thread or in another process and communicate back and forth between the main process in node.js and this worker to get that CPU-intensive work done. It used to be that node.js didn't have Worker Threads which made this task a little more complicated where you often had to use one or more additional processes (either via clustering or additional dedicated processes) in order to handle this CPU-intensive work, but now you can use Worker Threads which can be a bit more convenient.
For example, I have a server task that requires a very heavy amount of crypto (performing a billion crypto operations). If I put that in the main node.js thread, that essentially blocks the event loop so my server can't process other requests while that heavy duty crypto operation is running which would ruin the responsiveness of my server.
But, I was able to move the crypto work to a worker thread (actually to several worker threads) and then can crunch away on the crypto while my main thread stays nice and lively to handle other, unrelated incoming requests in a timely fashion.
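A minimal sketch of that pattern (the file names and the hashing workload here are made up for illustration, not the actual server code):

```js
// ---- main.js ----
const { Worker } = require('worker_threads');

function runHeavyCrypto(payload) {
  return new Promise((resolve, reject) => {
    const worker = new Worker('./heavy-worker.js', { workerData: payload });
    worker.once('message', resolve); // result posted back by the worker
    worker.once('error', reject);
  });
}

runHeavyCrypto({ seed: 'data', rounds: 5000000 })
  .then((digest) => console.log('done:', digest));
// Meanwhile the main thread's event loop stays free to serve requests.

// ---- heavy-worker.js ----
const { parentPort, workerData } = require('worker_threads');
const crypto = require('crypto');

let digest = workerData.seed;
for (let i = 0; i < workerData.rounds; i++) {
  digest = crypto.createHash('sha256').update(digest).digest('hex'); // CPU-bound loop
}
parentPort.postMessage(digest);
```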
First of all, Big Data has nothing to do with Node.js.
I/O intensive means that the given task often waits for I/O. The best examples for these are file operations, networking.
If the processor has to regularly wait for data to arrive, the task is said to be I/O intensive.
Node.js's asynchronous nature however makes it really good at I/O intensive tasks, as it can keep doing other work while it waits for the data to arrive asynchronously.
For example, if you have 10 clients connected to the server and one of them requests data or a task that is heavy to process, the server should not get stuck waiting until that task is finished, as that would mean longer response times for the other 9 clients and a bad user experience. Rather, the server should let the other 9 clients keep requesting data or tasks, and send each response back as its task finishes.
PS: You can read up on the event loop in Node.js.
What Node.js is great at is serving as the middle layer between clients and data sources, i.e. the inputs and outputs.
The reason Node.js is great at this is in the non-blocking event-driven approach it takes.
For example, when you make a request to a Node.js app that asks for some data from a database, Node.js will request that data and immediately return to other requests without being blocked by the database request.
Once the database sends the data back, Node.js triggers the callback (or resolves the promise) with that data and continues onwards.
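In code, that flow looks roughly like this (setTimeout stands in for the database round trip; there is no real driver here):

```js
const http = require('http');

// Fake query: the callback fires ~50ms later, as if a database answered.
function fakeDbQuery(callback) {
  setTimeout(() => callback(null, { rows: [] }), 50);
}

http.createServer((req, res) => {
  fakeDbQuery((err, data) => {
    // Runs back on the event loop once the "database" responds.
    res.end(JSON.stringify(data));
  });
  // Execution reaches here immediately: the query was dispatched
  // and the server is already free to accept other connections.
}).listen(3000);
```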
There's no race condition between these input and output events because their synchronization is done in a single threaded mechanism called the Event Loop. Only one event gets processed at a time.
We can think of the Event Loop as a single-seat rollercoaster ride in an amusement park that has many lines of people waiting to go on the ride, one by one. When you get to go depends on when you got in a line, how important you are or if a friend saved you a spot but nevertheless only one person at a time will be able to partake.
This non-blocking event-driven approach allows Node.js to very efficiently react to input and output events and process many read/write operations because it's not really doing much processing, the CPU work is quite low. It's just serving as the middle layer between you and the data.
On the other hand, if these events lead to some intense CPU operations, Node.js used to perform quite poorly because the Event Loop can process only one event at a time.
To use the rollercoaster analogy from above, a CPU-intensive task would be as if one person is taking a really long ride while all others have to wait for them to be done.
Newer versions of Node.js did get tools that allow it to do more than one thing at a time (parallelism) by using workers. The trick here is that every worker has its own Event Loop, which allows an application to move the intense work onto a different thread and run it in parallel with the rest of the application. Do note that this will only actually help if you run on a machine with more than 1 core. If your machine has 1 core, no matter what tool you use, you're gonna have a bad time because nothing can actually be done in parallel on a single-core machine.
In the case of I/O-intensive tasks, the majority of the time is spent waiting for network, filesystem, and perhaps database I/O to complete. Increasing hard disk speed or network bandwidth improves the overall performance.
In its most basic form Node.js is best suited for this type of computing. All I/O in Node.js is non-blocking and it allows other requests to be served while waiting for a particular read or write to complete.

How to process huge array of objects in nodejs

I want to process an array of length around 100,000 without putting too much load on the CPU. I researched streams and stumbled upon highlandjs, but I am unable to make it work.
I also tried using promises and processing in chunks, but it still puts a lot of load on the CPU. The program can be slow if needed, but it should not load the CPU.
Since node.js runs your Javascript as a single thread, if you want your server to be maximally responsive to incoming requests, then you need to remove any CPU intensive code from the main http server process. This means doing CPU intensive work in some other process or thread.
There are a bunch of different approaches to doing this:
Use the child_process module to launch another nodejs app that is purpose-built for doing your CPU intensive work (a sketch follows below).
Cluster your app so that you have N different processes that can do both CPU intensive work and handle requests.
Create a work queue and a number of worker processes that will handle the CPU intensive work.
Use the newer Worker Threads to move CPU intensive work to separate node.js threads (requires node v12+ for stable, non-experimental version of threads).
If you don't do this CPU intensive work very often, then #1 is probably simplest.
If you need scale for other reasons (like handling lots of incoming requests) and you don't do the CPU intensive stuff very often, then #2.
If you do the CPU intensive stuff pretty regularly and you want incoming request processing to always have the highest priority and you're willing to allow the CPU intensive stuff to take longer, then #3 (work queue) or #4 (threads) is probably the best and you can tune the number of workers to optimize your result.
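Here is a rough sketch of #1, the child_process approach (the file names and the per-item transform are illustrative placeholders):

```js
// ---- main.js ----
const { fork } = require('child_process');

function processArrayInChild(bigArray) {
  return new Promise((resolve, reject) => {
    const child = fork('./array-worker.js'); // separate node.js process
    child.once('message', resolve);          // result comes back over IPC
    child.once('error', reject);
    child.send(bigArray);                    // array is serialized to the child
  });
}

// ---- array-worker.js ----
function heavyTransform(item) {
  return item; // placeholder for the real CPU-intensive per-item work
}

process.on('message', (items) => {
  const result = items.map(heavyTransform); // runs in the child, so the server's event loop is never blocked
  process.send(result);
  process.exit(0);
});
```

Note that send() serializes the whole array over the IPC channel, so for very large arrays it can be cheaper to send chunks or pass a file path instead.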

Why run one Node.js process per core?

According to https://nodejs.org/api/cluster.html#cluster_cluster, one should run the same number of Node.js processes in parallel as the number of cores on the machine.
The supposed reasoning behind this is that Node.js is single threaded.
However, is this really true? Sure, the JavaScript code and the event loop run on one thread, but Node also has a worker thread pool. The default number of threads in this pool is 4. So why does it make sense to run one Node process per core?
This article has an extensive review of the threading mechanism of node.js; it's worth a read.
In short, the main point is that in plain node.js only a few function calls use the thread pool (DNS and FS calls). Your code mostly runs on the event loop only. So, for example, if you wrote a web app where each request takes 100ms of synchronous work, you are bound to 10 req/s per process; the thread pool won't be involved. And the way to increase throughput on a multicore system is to use the other cores.
Then come asynchronous (callback) functions. While they give you a sense of parallelization, what really happens is that the async work finishes in the background so the event loop can move on to another function call. Afterwards, the callback code still has to run on the event loop, so all the code you write runs on one and only one event loop and thus can't harness a multi-core system's power.
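That is exactly what the cluster module is for; a minimal version of the pattern from the docs linked in the question:

```js
const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isPrimary) {            // `cluster.isMaster` on Node versions before 16
  for (let i = 0; i < os.cpus().length; i++) {
    cluster.fork();                 // one Node.js process per CPU core
  }
  cluster.on('exit', (worker) => {
    console.log(`worker ${worker.process.pid} died, forking a replacement`);
    cluster.fork();
  });
} else {
  // Each worker has its own event loop; they all share port 3000.
  http.createServer((req, res) => {
    res.end(`handled by pid ${process.pid}\n`);
  }).listen(3000);
}
```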
The said document clearly states that Node is single-threaded:
A single instance of Node.js runs in a single thread. To take advantage of multi-core systems, the user will sometimes want to launch a cluster of Node.js processes to handle the load.
So a Node.js process has a single thread, unless new threads are created with the respective APIs: child_process, cluster, native add-ons, or one of the several built-in modules that use the libuv threadpool:
Asynchronous system APIs are used by Node.js whenever possible, but where they do not exist, libuv's threadpool is used to create asynchronous node APIs based on synchronous system APIs. Node.js APIs that use the threadpool are:
all fs APIs, other than the file watcher APIs and those that are explicitly synchronous
crypto.pbkdf2()
crypto.randomBytes(), unless it is used without a callback
crypto.randomFill()
dns.lookup()
all zlib APIs, other than those that are explicitly synchronous
A single thread uses one CPU core. In order to use the available resources to the fullest extent and utilize a multicore CPU, there should be several threads or processes, and the number of cores is used as a rule of thumb.
If the cluster processes occupy 100% CPU and it's known that other threads or external processes (such as a database service) will fight over CPU cores with the cluster processes, the number of cluster processes can be decreased.

How exactly does NodeJS handle high concurrent requests?

I was trying to understand how nodejs can achieve higher concurrency compared to thread-based approaches such as Servlet servers.
I already know that in nodejs "everything runs in parallel except your code", and also there is a backend thread pool in libuv to handle File IO or database calls which are usually the bottlenecks.
So here is my question: if nodejs uses a thread pool to handle database calls, how can it service more concurrent requests than Servlet servers such as Tomcat, given that Tomcat can also use NIO backed by epoll/kqueue to achieve high concurrency?
For example, suppose 100k concurrent requests come in and each requires database operations. If these 100k requests are to be serviced concurrently, don't we still end up creating 100k threads with nodejs, which might cause memory exhaustion as it does in Tomcat? Yes, the 100k threads are just an imagination, because (I know) nodejs has a fixed thread pool and different operations are queued on the event loop; but Tomcat handles things the same way: we can also configure the thread pool size in Tomcat, and it also queues requests.
Or, am I wrong to say that "nodejs uses backend thread pool in libuv to handle File IO or database calls"? Does nodejs use epoll/kqueue to handle database io without a separate thread?
I was reading this similar question but still didn't get the answer.
if nodejs uses thread pool to handle database calls
That's a wrong assumption. nodejs will typically use networking to talk to a local database running in a different process or on a different host. Networking in node.js does not use threads of any kind - it uses event driven I/O. What the database does for threads is up to the database and independent of node.js since it would be the same no matter which server environment you were using.
node.js does use a thread pool for local disk access, but high-scale applications usually use a database for the crux of their disk access, which runs in a separate process and has its own I/O optimizations to handle lots of requests. How a given database does it is up to that implementation, but it will not be using a nodejs thread per request.
I was trying to understand how nodejs can achieve higher concurrency compared to thread-based approaches such as Servlet servers.
The general concept is that a properly written server app in node.js uses async I/O for all I/O (except perhaps startup code that only runs during server startup). This means that it can have a lot of requests in-flight at the same time with only a single Javascript thread while most of them are waiting on some type of I/O. If you're going to have a lot of requests in-flight at the same time, it can be a lot more efficient for the system to do it the node.js way of a single thread where all the requests are cooperatively switched vs. using OS threads where every thread has OS overhead associated with it and every pre-emptive thread switch has OS and CPU overhead associated with it.
In node.js, there is no pre-emptive switching between the active requests. Only one runs at a time, and it runs until it either finishes or hits an asynchronous operation and has nothing else to do until that async I/O operation completes. At that point, the JS engine goes back to the event queue and picks out an event (probably for one of the other requests). This type of cooperative switching can be significantly faster and more efficient than OS-level threads. There is sometimes a programming cost in that a node.js developer has to code with async I/O in order to take advantage of this, which has a learning curve in order to get proficient at writing good, clean code with proper error handling, and a learning curve for debugging too.
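A small self-contained sketch of that cooperative model, with setTimeout standing in for async I/O:

```js
// One thread, 1000 requests in flight at once. Each spends ~100ms
// "waiting on I/O", and the waits overlap on the single event loop.
const simulatedIo = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function handleRequest(id) {
  await simulatedIo(100); // e.g. a database round trip
  return `response ${id}`;
}

const start = Date.now();
Promise.all(Array.from({ length: 1000 }, (_, i) => handleRequest(i)))
  .then((responses) => {
    // Completes in roughly 100ms total, not 1000 × 100ms.
    console.log(`${responses.length} responses in ${Date.now() - start}ms`);
  });
```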
For example, if there's a 100k concurrent request coming in and each requires database operations, if these 100k request are to be serviced concurrently, with nodejs we still end up creating 100k threads which might cause memory exhaustion as Tomcat does.
No, you will not be creating 100k threads. A node.js database interface layer that interfaces between node.js and the actual database code in another process or on another host may be written entirely in node.js (using TCP networking to talk to the database) and introduce no new threads at all or it may have some native code and use a small number of threads for its own native code operations, but it will likely be a small number of threads and nothing even close to one per request.
Or, am I wrong to say that "nodejs uses backend thread pool in libuv to handle File IO or database calls"? Does nodejs use epoll/kqueue to handle database io without a separate thread?
For file I/O, yes it uses a thread pool in libuv. For database calls, no - While the details depend entirely upon the database implementation, usually there is not a thread per database call. The database is typically in another process and the nodejs interface library for the DB either directly uses nodejs TCP to talk to the database (which uses no threads) or it has its own native code add-on that talks to the database which probably uses a small number of threads for its work, but typically not a thread per request.
