Node: one core, many processes

I have looked around online, and all I seem to find are answers to the question of "how does Node benefit from running on a multi-core CPU?"
But if you have a machine with just one core, only one process can actually be executing at any given instant (I am taking task scheduling into account here), and Node uses a single-threaded model.
My question: is there any scenario in which it makes sense to run multiple node processes in one core? And if the process is a web server that listens on a port, how can this ever work given that only one process can listen?

My question: is there any scenario in which it makes sense to run multiple node processes in one core?
Yes, there are some scenarios. See details below.
And if the process is a web server that listens on a port, how can this ever work given that only one process can listen?
The node.js clustering module creates a scenario where there is one listener on the desired port, but incoming requests are shared among all the clustered processes. More details to follow...
You can use the clustering module to run more than one process, all configured to handle incoming requests on the same port. If you want to know exactly how incoming requests are shared among the different clustered processes, you can read the explanation in this blog post.
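As a concrete illustration, here is a minimal sketch of the clustering module in action (the port number and the one-worker-per-core choice are arbitrary here):

```js
// cluster-sketch.js: one shared listener, many worker processes
const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isMaster) { // named isPrimary in newer Node versions
  // Fork one worker per CPU core.
  for (let i = 0; i < os.cpus().length; i++) {
    cluster.fork();
  }
} else {
  // Every worker calls listen(8000), but there is effectively a single
  // listening socket; incoming connections are distributed among the workers.
  http.createServer((req, res) => {
    res.end(`handled by worker ${process.pid}\n`);
  }).listen(8000);
}
```

Hitting the port repeatedly (for example with curl) should show different worker PIDs serving different requests.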
As to whether you could benefit from more processes than you have cores, the answer is that it depends on what type of benefit you are looking for. If you have a properly written server with all async I/O, then adding more processes than cores will likely not improve your overall throughput (as measured by requests per second that your server could process).
But, if you have any CPU-heavy processing in your requests, having a few more processes may provide somewhat fairer scheduling between simultaneous requests, because the OS will "share" the CPU among each of your processes. This will likely decrease overall throughput slightly (because of the added overhead of task-switching the CPU between processes), but it may make request processing more even when there are multiple requests to be processed at once.
If your requests don't have much CPU-heavy processing and are really just waiting for I/O most of the time, then there's probably no benefit to adding more processes than cores.
So, it really depends on what you want to optimize for and what your situation is.

Related

Worker threads in express application

I have an express backend application, which listens for http requests from a web application. This backend application is running on AWS ECS Fargate.
So my question is: does it make sense to use multithreading (worker threads in Node.js) in this backend application? There are both CPU-intensive and non-intensive functions in the endpoints. For instance, should I distribute every incoming request to other threads right away, regardless of the intensity of the call, so that the main thread is never blocked? Or should I only use multithreading for intensive jobs?
Any suggestions, pros and cons on this topic are very much appreciated.
Questions like this ultimately only really get answered by profiling the system under representative load. This is because our guesstimates about what takes CPU, and how much, are rarely very good; this stuff has to be measured to really know.
That said, there are some general design guidelines you could consider:
If you have specific requests that you believe in advance are likely to be CPU-heavy, then you may want to consider putting those into a queue that is served by a pool of threads (a sketch of such a pool follows this list).
If you have specific requests that you believe in advance are really just doing non-blocking I/O, then those don't need to get complicated at all by threading. The main thread can likely handle those just fine.
If only a relatively small fraction of your requests are CPU-heavy, the simplest design may be to use the nodejs clustering module and just let it spread your requests out among CPUs, figuring that this by itself (without any other major design changes) will get any CPU-bound requests off the main loop. If a high percentage of your requests are CPU-bound and you want to prioritize the non-CPU requests so they are always quick, then you'd probably be better off with a thread pool that CPU-heavy requests are handed off to, so the non-CPU-heavy requests aren't held up behind them.
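To make the thread-pool idea concrete, here is a rough sketch of a fixed-size pool fed by a FIFO queue, built on Node's worker_threads module. The names (WorkerPool, runJob) are made up for illustration, and error handling is omitted:

```js
// pool-sketch.js: a fixed-size worker pool fed by a FIFO queue
const { Worker } = require('worker_threads');
const os = require('os');

class WorkerPool {
  // script: path to a worker file that replies to each message, e.g.
  //   parentPort.on('message', d => parentPort.postMessage(heavyWork(d)));
  constructor(script, size = os.cpus().length) {
    this.queue = []; // jobs waiting for a free worker
    this.idle = [];  // workers ready to take a job
    for (let i = 0; i < size; i++) this.idle.push(new Worker(script));
  }

  runJob(data) {
    return new Promise((resolve) => {
      this.queue.push({ data, resolve });
      this.dispatch();
    });
  }

  dispatch() {
    if (this.queue.length === 0 || this.idle.length === 0) return;
    const worker = this.idle.pop();
    const job = this.queue.shift();
    worker.once('message', (result) => {
      job.resolve(result);
      this.idle.push(worker); // return the worker to the pool
      this.dispatch();        // start the next queued job, if any
    });
    worker.postMessage(job.data);
  }
}
```

The main thread stays free to serve I/O-bound requests while queued CPU-heavy jobs run at most `size` at a time.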
So my question is: does it make sense to use multithreading (worker threads in Node.js) in this backend application?
Yes, in some circumstances. But, when to add this extra layer of complication really depends upon the specific metrics of CPU usage in your requests.
For instance, should I distribute every incoming request to other threads right away, regardless of the intensity of the call, so that the main thread is never blocked? Or should I only use multithreading for intensive jobs?
This depends upon how you want to prioritize different requests. If you want requests prioritized in FIFO order, where each one gets to start based on when it arrived, regardless of what type of request it is, then yes, you can just distribute all requests to threads. In fact, it's probably easier to use the clustering module for this, because that's exactly what it does. If, on the other hand, you want non-CPU-heavy requests to always run quickly and not have to wait behind CPU-heavy requests, then you may want to push only the CPU-heavy requests into a queue that is processed by a thread pool. This then allows the main thread to work on the non-CPU-heavy requests immediately, regardless of how many CPU-heavy requests are currently crunching away in the thread pool.
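As a sketch of that second approach, an Express server can handle light requests inline and hand CPU-heavy ones off to a worker thread. For brevity this spawns a worker per heavy request rather than reusing a pool, and fib() is just a stand-in for whatever the real CPU-heavy work is:

```js
// heavy-route-sketch.js: light requests inline, heavy ones in a worker
const express = require('express');
const { Worker, isMainThread, parentPort, workerData } = require('worker_threads');

if (isMainThread) {
  const app = express();

  // Light, I/O-bound route: handled directly on the main thread.
  app.get('/light', (req, res) => res.send('ok'));

  // CPU-heavy route: offloaded so the event loop stays responsive.
  app.get('/heavy/:n', (req, res) => {
    const worker = new Worker(__filename, { workerData: Number(req.params.n) });
    worker.once('message', (result) => res.json({ result }));
    worker.once('error', (err) => res.status(500).send(err.message));
  });

  app.listen(3000);
} else {
  // Worker side: run the CPU-bound computation and report the result back.
  const fib = (n) => (n < 2 ? n : fib(n - 1) + fib(n - 2));
  parentPort.postMessage(fib(workerData));
}
```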

Nodejs clusters performance improvement?

I was watching this video about Node.js clustering, where you can spawn off several child processes. He said that "the parent cluster takes care of the child cluster in a round-robin approach giving a chunk of time to each child process". Why would that be any different from running a single thread? Unless each child can handle itself, I do not see much benefit in doing so. However, in the end he pulls up some benchmarks that are really good, which makes me wonder why people even use a non-clustered app if having clusters improves performance by that much.
This statement: "the parent cluster takes care of the child cluster in a round-robin approach giving a chunk of time to each child process" is not entirely correct.
Each cluster process is a separate OS process. The operating system takes care of sharing the CPU among processes, not the parent process. The parent process takes each incoming http request and splits them among the child processes, but that is the extent of the parent process' involvement.
If there is only one CPU core in the server hardware (not usually the case these days unless you're on a hosting plan that only gives you access to one CPU core), then that single CPU core is shared among all of your processes/threads that are active and wish to be running.
If there is more than one CPU core, then individual processes can be assigned to separate cores and your processes can be running truly in parallel. This is where the biggest advantage comes from with nodejs clustering. Then, you get multiple requests truly running in parallel rather than in a single thread as it would be with a single nodejs process.
However, in the end he pulls up some benchmarks that are really good, which makes me wonder why people even use a non-clustered app if having clusters improves performance by that much.
Clustering adds some level of complication. For example, if you have any server-side state kept in memory within your server process that all incoming requests need access to, then that won't work with clustered processes, because the clustered processes don't have access to the same memory (they each have their own). In that case, you typically have to either remove the server-side state or move it to its own process (usually a database) that all the clustered processes can access. This is very doable, but it adds an extra level of complication and has its own performance implications.
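A tiny sketch of the pitfall: each clustered process keeps its own copy of any in-memory variable, so a naive hit counter silently undercounts:

```js
// state-pitfall-sketch.js: in-memory state is per-process under clustering
const cluster = require('cluster');
const http = require('http');

if (cluster.isMaster) {
  cluster.fork();
  cluster.fork();
} else {
  let hits = 0; // each worker has its OWN copy of this counter
  http.createServer((req, res) => {
    // Requests land on different workers, so each process only
    // counts the hits it handled itself.
    res.end(`hits seen by ${process.pid}: ${++hits}\n`);
  }).listen(8000);
}
```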
In addition, clustering takes more memory on your server. It may also require more monitoring and logging to make sure all the processes stay healthy.
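On the monitoring point, a common minimal safeguard is to have the master replace any worker that dies; this is a sketch, not a full health-check system:

```js
// respawn-sketch.js: restart clustered workers that die
const cluster = require('cluster');
const os = require('os');

if (cluster.isMaster) {
  for (let i = 0; i < os.cpus().length; i++) cluster.fork();

  // If a worker exits for any reason, log it and start a replacement.
  cluster.on('exit', (worker, code, signal) => {
    console.log(`worker ${worker.process.pid} died (${signal || code}); restarting`);
    cluster.fork();
  });
}
// (the worker-side http server would go in the else branch, as above)
```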

Is Vibe.D multi-threaded to handle concurrent requests?

I know that the Vibe.D implementation is based on fibers.
But I don't know how high-load scenarios are handled by Vibe.D. Does the Vibe.D scheduler allocate fibers across multiple threads, or does a single thread run all the fibers?
This consideration is very important, because even with the high efficiency of fibers, a lot of CPU time is wasted if no more than one thread is used to serve all incoming requests.
Their front page says yes: http://vibed.org/
This page has the details: http://vibed.org/features#multi-threading
Distributed processing of incoming connections
The HTTP server (as well as any other TCP based server) can be instructed to process incoming connections across the worker threads of the thread pool instead of in the main thread. For applications that don't need to share state across different connections in the process, this can increase the maximum number of requests per second linearly with the number of cores in the system. This feature is enabled using the HTTPServerOption.distribute or TCPListenOptions.distribute settings.

Controlling the flow of requests without dropping them - NodeJS

I have a simple nodejs webserver running. It:
Accepts requests
Spawns separate thread to perform background processing
Background thread returns results
App responds to client
Using Apache Bench ("ab -r -n 100 -c 10"), I perform 100 requests, 10 at a time.
Average response time of 5.6 seconds.
My logic for using nodejs is that it is typically quite resource-efficient, especially when the bulk of the work is being done by another process. It seems like the most lightweight webserver option for this scenario.
The Problem
With 10 concurrent requests, my CPU was maxed out, which is no surprise since there is CPU-intensive work going on in the background.
Scaling horizontally is an easy thing to do, although I want to make the most of each server for obvious reasons.
So, with nodejs (either raw or with some framework), how can one keep the load under control so as not to go overkill on the CPU?
Potential Approach?
Could I accept the request, store it in a db or some other persistent storage, and have a separate process that uses an async library to process x at a time?
In your potential approach, you're basically describing a queue. You can store incoming messages (jobs) there and have each process take one job at a time, only getting the next one when it has finished processing the previous job. You could spawn a number of processes working in parallel, say, one per core in your system. Spawning more won't help performance, because multiple processes sharing a core will just run slower. Keeping one core free might be preferred, to keep the system responsive for administrative tasks.
Many different queues exist. A node-based one using redis for persistence that seems to be well supported is Kue (I have no personal experience using it). I found a tutorial for building an implementation with Kue here. Depending on the software your environment is running in though, another choice might make more sense.
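For a rough feel of what that looks like with Kue, the web process enqueues jobs and one or more worker processes consume them with bounded concurrency. This is a sketch based on Kue's documented API; doHeavyWork() is a hypothetical stand-in for your actual processing:

```js
// kue-sketch.js: redis-backed job queue with bounded concurrency
const kue = require('kue');
const queue = kue.createQueue(); // connects to redis on localhost by default

// Producer (web server): enqueue the job instead of doing the work inline.
queue.create('process-request', { payload: 'whatever the request needs' })
  .save((err) => { if (err) console.error('enqueue failed:', err); });

// Consumer (separate process): pull jobs, at most 4 running at once.
queue.process('process-request', 4, (job, done) => {
  doHeavyWork(job.data.payload, done); // hypothetical CPU-heavy function
});
```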
Good luck and have fun!

Seeking tutorials and information on load-balancing between threads

I know the term "Load Balancing" can be very broad, but the subject I'm trying to explain is more specific, and I don't know the proper terminology. What I'm building is a set of Server/Client applications. The server needs to be able to handle a massive amount of data transfer, as well as client connections, so I started looking into multi-threading.
There are essentially 3 ways I can see to implement any sort of threading for the server...
One thread handling all requests (defeats the purpose of a thread if 500 clients are logged in)
One thread per user (risky, creating 1 thread for each of the 500 clients)
Pool of threads which divide the work evenly for any number of clients (What I'm seeking)
The third one is what I'd like to know. This consists of a setup like this:
Maximum 250 threads running at once
500 clients will not create 500 threads, but share the 250
A Queue of requests will be pending to be passed into a thread
A thread is not tied down to a client, and vice-versa
Server decides which thread to send a request to based on activity (load balance)
I'm not seeking any code quite yet, just information on how a setup like this works, and preferably a tutorial to accomplish this in Delphi (XE2). Even a proper word or name for this subject would be sufficient so I can do the searching myself.
EDIT
I found it necessary to explain a little about what this will be used for. I will be streaming both commands and images, there will be a double-socket setup where there's one "Main Command Socket" and another "Add-on Image Streaming Socket". So really one connection is 2 socket connections.
Each connection to the server's main socket creates (or re-uses) an object representing all the data needed for that connection, including threads, images, settings, etc. For every connection to the main socket, a streaming socket is also connected. It's not always streaming images, but the command socket is always ready.
The point is that I already have a threading mechanism in my current setup (1 thread per session object) and I'd like to shift that over to a pool-like multithreading environment. The two connections together require higher-level control over these threads, and I can't rely on something like Indy to keep these synchronized; I'd rather know how things are working than learn to trust something else to do the work for me.
IOCP server. It's the only high-performance solution. It's essentially asynchronous in user mode ('overlapped I/O' in M$-speak): a pool of threads issues WSARecv, WSASend, and AcceptEx calls and then all wait on an IOCP queue for completion records. When something useful happens, a kernel thread pool performs the actual I/O and then queues up the completion records.
You need at least a buffer class and a socket class (and probably others for high performance: objectPool and pooledObject classes so you can make socket and buffer pools).
500 threads may not be an issue on a server class computer. A blocking TCP thread doesn't do much while it's waiting for the server to respond.
There's nothing stopping you from creating some type of work queue on the server side, served by a limited size pool of threads. A simple thread-safe TList works great as a queue, and you can easily put a message handler on each server thread for notifications.
Still, at some point you may have too much work, or too many threads, for the server to handle. This is usually handled by adding another application server.
To ensure scalability, code for the idea of multiple servers, and you can keep scaling by adding hardware.
There may be some reason to limit the number of actual work threads, such as limiting lock contention on a database, or something similar, however, in general, you distribute work by adding threads, and let the hardware (CPU, redirector, switch, NAS, etc.) schedule the load.
Your implementation is completely tied to the communications components you use. If you use Indy, or anything based on Indy, it is one thread per connection - period! There is no way to change this. Indy will scale to 100's of connections, but not 1000's. Your best hope to use thread pools with your communications components is IOCP, but here your choices are limited by the lack of third-party components. I have done all the investigation before and you can see my question at stackoverflow.com/questions/7150093/scalable-delphi-tcp-server-implementation.
I have a fully working distributed development framework (threading and comms) that has been used in production for over 3 years now across more than a half-dozen separate systems and basically covers everything you have asked so far. The code can be found on the web as well.
