Using worker_threads from Node 12, is it suitable to establish remote connections within the workers and keep those connections alive?
I don't mean sharing the socket between the master and the workers like we could do with node cluster and fork.
The idea would be to have pools of secure connections already established within the workers to use if needed.
Let's say I have a pool of 10 workers. When a worker is created, some pre-established "TLS" connections (streams) are created to servers X, Y and Z, and the worker is marked as "ready".
Each time I use a worker to process a "heavy" task (mapReduce, etc.), if I need to post data to or get data from server X, Y or Z during processing,
I use the appropriate "TLS" connection already established from the pool.
Once the task is completed, the result is returned to the master and the worker just executes the next task.
1) Do you see any side effect / impact of doing so?
2) Would it be better to have the pool of "TLS" connections on the "main thread" (master)? If "remote" data is needed within the workers during the tasks, use the "postMessage" method to communicate with the "master" (and vice versa).
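Roughly, option 2 would look like the sketch below; the hostnames, message types and field names are only illustrative, not an existing API:

// main.js - the master keeps the TLS pool and relays data on request (sketch only)
const { Worker } = require('worker_threads');
const tls = require('tls');

// hypothetical pre-established connection to "server X"
const pool = new Map();
pool.set('X', tls.connect({ host: 'server-x.example.com', port: 443 }));

const worker = new Worker('./task.js');
worker.on('message', (msg) => {
  if (msg.type === 'fetch') {
    const socket = pool.get(msg.server);
    socket.write(msg.request);                  // forward the worker's request to server X
    socket.once('data', (data) => {             // simplistic: assumes a single reply chunk
      worker.postMessage({ type: 'fetchResult', id: msg.id, data });
    });
  }
});

The worker would send a { type: 'fetch', ... } message and continue once the matching fetchResult comes back.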
Thanks
Worker Threads do not give you anything special for remote connections. However, you can build your own system that works similarly using TLS sockets. For such a system I would definitely recommend keeping these types of connections alive: there is significant latency in setting up these connections, and keeping them active in memory uses only a minimal amount of resources.
Keep in mind that a system like this has some drawbacks:
You are working with different machines, and each of these machines can have its own set of failure conditions.
You are communicating over a network, connections with remote servers might suddenly drop, for any reason imaginable.
You are increasing the physical distance, which will add latency.
So keep this in the back of your mind.
Would I recommend building a system like this? It is really hard to say; it depends on your use case, time and money. You mentioned the cluster nodes are processing 'heavy' tasks, by which I reckon you mean CPU / GPU intensive tasks. So a system like this might be a good solution; however, a simple REST API in front of your processing servers might be good enough. Or maybe even database-synchronized servers that just check the database for tasks to execute.
There are many solutions to the same problem; you just have to consider what works best for your project(s).
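As a rough sketch of the kind of per-worker system described above — the hostnames, the "ready" message and the task format are assumptions, not a fixed API:

// worker.js - each worker pre-establishes its own TLS connections (sketch only)
const { parentPort } = require('worker_threads');
const tls = require('tls');

const targets = [
  { name: 'X', host: 'server-x.example.com', port: 443 },
  { name: 'Y', host: 'server-y.example.com', port: 443 },
];
const connections = new Map();
let pending = targets.length;

for (const t of targets) {
  const socket = tls.connect({ host: t.host, port: t.port }, () => {
    connections.set(t.name, socket);
    if (--pending === 0) parentPort.postMessage({ type: 'ready' }); // worker is usable now
  });
  socket.setKeepAlive(true);                 // keep the connection alive between tasks
  socket.on('error', () => { /* reconnect or mark this worker as not ready */ });
}

parentPort.on('message', (task) => {
  // ... heavy processing here; use connections.get('X') to talk to server X ...
  parentPort.postMessage({ type: 'result', id: task.id });
});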
I implemented a connection pool (poolMin=poolMax=10) with node-oracledb and I saw a performance difference of up to 100 times, especially with a small number of users, like 10. Really impressive. I also increased UV_THREADPOOL_SIZE to 4 + poolMax. At this point there are some things I could not understand.
process.env.UV_THREADPOOL_SIZE = 4 + config.pool.poolMax // Default + Max
Node.js works as a single thread (with 4 additional threads, none of which are used for network I/O). So when I use a pool with 10 connections, can the single thread use all of these connections? Or isn't it single-threaded with these settings anymore, since I added 10 more to UV_THREADPOOL_SIZE? I would be grateful to anyone who could explain this.
Btw, I wonder if using a fixed-size pool like 10 would cause a problem in case of too many users? For example, if the number of concurrent users is normally 500, we can reach 5000 concurrent users on certain days of the year. Do I need to make a special setting (e.g. pool size 100) for those days, or will the default be enough?
Thanks in advance.
When you do something like connection.execute(), that work will be handled by a Node.js worker thread until the call completes. And each underlying Oracle connection can only ever do one 'thing' (like execute, or fetch LOB data) at a time - this is a fundamental (i.e. insurmountable) behavior of Oracle connections.
For node-oracledb you want the number of worker threads to be at least as big as the number of connections in the connection pool, plus some extra for non database work. This allows connections to do their thing without blocking any other connection.
Any use of Promise.all() (and similar constructs) using a single connection should be assessed and considered for rewriting as a simple loop. Prior to node-oracledb 5.2, each of the 'parallel' operations for Promise.all() on a single connection will use a thread but this will be blocked waiting for prior work on the connection to complete, so you might need even more threads available. From 5.2 onwards any 'parallel' operations on a single connection will be queued in the JavaScript layer of node-oracledb and will be executed sequentially, so you will only need a worker thread per connection at most. In either version, using Promise.all() where each unit of work has its own connection is different, and only subject to the one-connection per thread requirements.
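As a rough illustration (the query, bind values and variable names below are made up):

// inside an async function that already has a single connection from the pool
const sql = 'SELECT name FROM employees WHERE id = :id';

// avoid (pre-5.2 each of these ties up a worker thread while it waits its turn):
// await Promise.all(ids.map((id) => connection.execute(sql, [id])));

// prefer a simple sequential loop on that one connection:
const results = [];
for (const id of ids) {
  const result = await connection.execute(sql, [id]);
  results.push(result.rows);
}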
Check the node-oracledb documentation Connections, Threads, and Parallelism and Connection Pool Sizing.
Separate to how connections are used, first you have to get a connection. Node-oracledb will queue connection pool requests (e.g. pool.getConnection()) if every connection in the pool is already in use. This provides some resiliency under connection spikes. There are some limits to help with real storms: queueMax and queueTimeout. Yes, at peak periods you might need to increase the poolMax value. You can check the pool statistics to see pool behavior. You don't want to make the pool too big - see the doc.
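For reference, those queue limits are passed when the pool is created; the credentials and connect string below are placeholders, and the queueMax / queueTimeout values shown are the documented defaults:

const oracledb = require('oracledb');

async function initPool() {
  await oracledb.createPool({
    user: 'hr',                               // placeholder credentials
    password: process.env.DB_PASSWORD,
    connectString: 'localhost/XEPDB1',        // placeholder connect string
    poolMin: 10,
    poolMax: 10,
    queueMax: 500,       // how many pool.getConnection() calls may wait in the queue
    queueTimeout: 60000  // ms a queued request waits before it fails with an error
  });
}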
Side note: setting process.env.UV_THREADPOOL_SIZE inside your script doesn't have an effect in Node.js on Windows; there the UV_THREADPOOL_SIZE environment variable must be set before Node.js is started.
In the Node.js API doc, it says:
The cluster module supports two methods of distributing incoming
connections.
The first one (and the default one on all platforms except Windows),
is the round-robin approach, where the master process listens on a
port, accepts new connections and distributes them across the workers
in a round-robin fashion, with some built-in smarts to avoid
overloading a worker process.
The second approach is where the master process creates the listen
socket and sends it to interested workers. The workers then accept
incoming connections directly.
The second approach should, in theory, give the best performance. In
practice however, distribution tends to be very unbalanced due to
operating system scheduler vagaries. Loads have been observed where
over 70% of all connections ended up in just two processes, out of a
total of eight.
I know PM2 is using the first one, but why doesn't it use the second? Just because of unbalanced distribution? Thanks.
The second may add CPU load when every child process is trying to 'grab' the socket the master sent.
I'm writing a socket.io-based server in Node.js (6.9.0). I am using the builtin cluster module to enable multiple processes. For now, there are only two processes: a master and a worker. The master receives the connections and maintains an in-memory global data structure (which the worker can query via IPC). The worker process does the majority of work by handling each incoming connection.
I am finding a hanging condition that I cannot attribute to any internal failure when the server is stressed at 300 concurrent users. Under lower concurrency, I don't see the hanging condition.
I'm enabling all forms of debugging (using the debug module: socket.io:socket, socket.io:client as well as my own custom calls to debug).
The last activity I can see is in socket.io, however, the messages indicate that sockets are closing ("reason client namespace disconnect") due to their own "end of test" cycle. It just seems like incoming connections are not being serviced.
I'm using Artillery.io as the test client.
In the server application, I have handlers for uncaught exceptions and try-catch blocks around everything.
In a prior iteration, I also used cluster, but reversed the responsibilities so that the master process handled the connections (with the worker handling global data). That didn't exhibit the same failure. Not sure if something is wrong with the connection distribution. For that, I have also dumped internalMessage events to monitor the internal workings of cluster.
I am not using any other module for connection distribution or sticky sessions. As there is only a single process handling connections (at this time), it doesn't seem relevant.
I was able to remove the hanging condition by changing the cluster scheduling policy from Round Robin (SCHED_RR) to None, which is OS specific (SCHED_NONE). I can't tell whether this is due to a bug in connection distribution (or something else inherent in the scheduling policy), but this one change seems to prevent the hanging condition.
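For anyone who hits the same thing, this is the one-line change, set before the first fork (the server setup itself is elided):

const cluster = require('cluster');

// Must be set before the first cluster.fork();
// the same switch exists as the NODE_CLUSTER_SCHED_POLICY=none environment variable.
cluster.schedulingPolicy = cluster.SCHED_NONE; // default is cluster.SCHED_RR (except on Windows)

if (cluster.isMaster) {
  cluster.fork();
} else {
  // ... create the socket.io / HTTP server in the worker ...
}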
We have a set of micro-services that I'd like to load test in a manner that is consistent with how they are accessed.
After settling on Locust as my tool of choice, I found out that the underlying HTTP/TCP layer uses connection pooling, because I keep seeing messages like these:
WARNING/requests.packages.urllib3.connectionpool: Connection pool is full, discarding connection:
As I understand it, this message is telling me that it discards a connection from the pool that it manages. I assume that it still creates a new connection, and adds it in the place of the one that it discarded.
Is that what it does?
Does it do this without the connection failing?
I don't think that our micro-services keep any sessions open. The connections are made, from the far end, to our services, which provide a result, and then the connection is closed. So the test is handling connections in a way that differs from how the services are actually used. Is there a way to get the requests lib to not use a pool, and to go through the work of setting up and tearing down every connection made through it, each time?
Is there any reason why we wouldn't want to test this way?
If it is preferable to test with a connection pool, how should I anticipate the difference in load when it's not done this way in production?
That's correct. Unless you set the urllib3 pool to blocking, it will generate more connections than the pool is configured to hold, as needed, and then will discard them once the request is done.
This often happens when you have more threads using a pool than the number of connections the pool is configured to store. urllib3 takes a maxsize parameter (defaults to 1) which you can set to the number of threads you're running. For requests, you'll need to make a custom adapter to do this. See:
https://stackoverflow.com/a/18845952/187878
https://laike9m.com/blog/requests-secret-pool_connections-and-pool_maxsize,89/
That said, it's merely a warning which some people ignore, so it's not a failure. But if this happens a lot in production, that probably means you should tweak your configuration because creating/discarding new connections all the time is fairly costly.
In general, it's a good idea to re-use connections for this reason.
My suggestions would be in this order:
Re-use connections, or
Increase the number of connections that get pooled to match the number of threads, or
Disable the warning if you'd rather not deal with it.
Can someone explain in detail how the core cluster module works in Node.js?
How the workers are able to listen to a single port?
As far as I know, the master process does the listening, but how can it know which ports to listen on, since workers are started after the master process? Do they somehow communicate that back to the master using the child_process.fork communication channel? And if so, how is an incoming connection on the port passed from the master to the worker?
Also I'm wondering what logic is used to determine to which worker an incoming connection is passed?
I know this is an old question, but this is now explained at nodejs.org here:
The worker processes are spawned using the child_process.fork method,
so that they can communicate with the parent via IPC and pass server
handles back and forth.
When you call server.listen(...) in a worker, it serializes the
arguments and passes the request to the master process. If the master
process already has a listening server matching the worker's
requirements, then it passes the handle to the worker. If it does not
already have a listening server matching that requirement, then it
will create one, and pass the handle to the worker.
This causes potentially surprising behavior in three edge cases:
server.listen({fd: 7}) -
Because the message is passed to the master,
file descriptor 7 in the parent will be listened on, and the handle
passed to the worker, rather than listening to the worker's idea of
what the number 7 file descriptor references.
server.listen(handle) -
Listening on handles explicitly will cause the
worker to use the supplied handle, rather than talk to the master
process. If the worker already has the handle, then it's presumed that
you know what you are doing.
server.listen(0) -
Normally, this will cause servers to listen on a
random port. However, in a cluster, each worker will receive the same
"random" port each time they do listen(0). In essence, the port is
random the first time, but predictable thereafter. If you want to
listen on a unique port, generate a port number based on the cluster
worker ID.
When multiple processes are all accept()ing on the same underlying
resource, the operating system load-balances across them very
efficiently. There is no routing logic in Node.js, or in your program,
and no shared state between the workers. Therefore, it is important to
design your program such that it does not rely too heavily on
in-memory data objects for things like sessions and login.
Because workers are all separate processes, they can be killed or
re-spawned depending on your program's needs, without affecting other
workers. As long as there are some workers still alive, the server
will continue to accept connections. Node does not automatically
manage the number of workers for you, however. It is your
responsibility to manage the worker pool for your application's needs.
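A minimal sketch of what the quoted text describes (the port number is arbitrary):

const cluster = require('cluster');
const http = require('http');
const numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  for (let i = 0; i < numCPUs; i++) {
    cluster.fork();
  }
} else {
  // Every worker "listens" on the same port; under the hood the master owns the
  // listening handle and shares it with the workers, as described above.
  http.createServer((req, res) => {
    res.end('handled by worker ' + cluster.worker.id + '\n');
  }).listen(8000);
}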
Node.js uses a round-robin algorithm to load-balance between the child processes. It hands each incoming connection to the next worker, based on the RR algorithm.
The children and the parent do not actually share anything; the whole script is executed from beginning to end. That is the main difference from a traditional C fork: a traditionally forked C child would continue executing from the instruction where it left off, not from the beginning like Node.js. So if you want to share anything, you need to connect to an external store such as Memcached or Redis.
So the code below prints 6 6 6 (no evil meant) on the console: once from the master and once from each of the two workers.
var cluster = require("cluster");

var a = 5;
a++;
console.log(a); // runs in the master and again in every forked worker

if (cluster.isMaster) {
  cluster.fork();
  cluster.fork();
}
Here is a blog post that explains this
As an update to #OpenUserX03's answer: Node.js no longer uses the operating system's load balancing but a built-in one. From this post:
To fix that, Node v0.12 got a new implementation using a round-robin algorithm to distribute the load between workers in a better way. This is the default approach Node uses since then, including Node v6.0.0.