Redis Node – close all connections - node.js

Is there any command or library in Node that would help me close all of the Redis connections?
I know I can track my connections and quit them inside process.on('exit', ...), but I'd like a fallback method that closes all existing connections before initiating new ones.
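As far as I know there is no single built-in command that closes every open connection across clients, but a common pattern is to keep a registry of the clients you create and quit them together. A minimal sketch, assuming node-redis v4 (the helper names are mine):

const { createClient } = require('redis');

const clients = []; // registry of every client this app creates

function makeClient(url) {
  const client = createClient({ url });
  clients.push(client);
  return client;
}

async function closeAllClients() {
  // quit() sends QUIT and waits for pending replies; disconnect() would force-close
  await Promise.all(clients.map((c) => c.quit()));
  clients.length = 0;
}

// 'exit' handlers cannot await async work, so hook signals instead
process.on('SIGINT', async () => {
  await closeAllClients();
  process.exit(0);
});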

Related

Socket.io transport close in version 4 instead of ping timeout in version 2

I am working on a socket.io app using Node.js and a Redis cache.
I recently upgraded my socket.io server and client from version 2 to the latest version 4. On a disconnect, the 'disconnect' handler on the manager used to report the reason ping timeout in version 2, whereas the reason for the same event is now transport close in version 4. I have the same ping timeout, set to 60000 ms. Can anyone help with what I am missing here?
The reason I am concerned about the reason transport close is that a window or tab close produces the same reason. The app currently logs the user out for that reason. This was used on top of the beforeunload handler to ensure that the user is removed from the Redis cache on disconnect.
Can anyone suggest a way to differentiate between a transport close caused by a network disconnect and one caused by a window close?
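One workaround, not from the original thread but a common pattern, is to have the client send an explicit signal before the tab closes so the server can tell an intentional close from a network drop. A sketch for socket.io v4; the 'tab-closing' event name is hypothetical, and an emit from beforeunload is best-effort, not guaranteed to arrive:

// client: announce an intentional close before the tab goes away
window.addEventListener('beforeunload', () => {
  socket.emit('tab-closing');
});

// server: flag the socket so the 'disconnect' handler can tell the cases apart
io.on('connection', (socket) => {
  socket.on('tab-closing', () => {
    socket.data.closedByUser = true;
  });
  socket.on('disconnect', (reason) => {
    if (reason === 'transport close' && socket.data.closedByUser) {
      // window or tab close: remove the user from the Redis cache
    } else if (reason === 'transport close') {
      // likely a network drop: keep the session for a grace period
    }
  });
});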

Should I close my mongoose node.js connection after saving into the database?

I have the following code in my app.js which runs on server start (npm start)
mongo.mongoConnect('connection_string', 'users').then((x) => {
  console.log('Database connection successful');
  app.listen(5000, () => console.log('Server started on port 5000'));
})
.catch(err => {
  console.error(err.stack);
  process.exit(1);
});

process.on('SIGINT', mongo.mongoDisconnect).on('SIGTERM', mongo.mongoDisconnect);
As you can see, I hook SIGINT and SIGTERM to close my connections on process exit.
I've been reading a lot about how to deal with database connections in Mongo and know that I should open the connection once and share it across my application.
Does that mean that even after the save() method that stores data in Mongo following a POST request, I should not be closing my connection? If I close it, how am I going to open it again, since the connection happens on app start?
I'm asking because in PHP it was my practice to always open and close the connection after querying the MySQL database.
Likewise, does that mean the connection will close only on server shutdown? In other words, it will always be present, since I never intend to shut down my Node.js backend instance?
It is formally correct to open a connection, run a query, and then close the connection, but it is not a good practice, because opening a connection is an "expensive" operation and connections can be reused, which is much more efficient. The main restriction on an open connection is that it can only be used by 1 thread at a time. (More accurately, once a request is sent on a connection, no other requests can be sent on that connection until the response to that request is received.)
If your application is short lived or inherently single threaded, as may be the case when running as a "serverless" function, it may be acceptable to open and close a connection on each request.
While in theory it might be acceptable to open a single connection at the start of the program, keep a global reference to that connection, and reuse it, in practice there are common ways in which a connection becomes unusable that you would have to account for, and handling all the possibilities requires complex code. It gets even more complicated when, as is possible with MongoDB replica sets, you are actually connecting to more than one server and want to retry a command on a second server if the first one fails to respond.
That is why the standard and "best" practice is to use a "connection pool" to manage your database connections. A pool opens a set of network connections to the database, verifies and maintains their health, and dynamically assigns virtual database connections to actual network connections as needed. The pool is implemented in a library that will have received a lot of real-world testing and is extremely likely to be better than anything you would write yourself. Connection pools have configuration options that let you set any behavior you want, including opening a new connection for each request and closing it when done, but they offer a wide range of performance-enhancing capabilities, such as reusing connections and avoiding the overhead of creating them for each request.
This is why, for MongoDB, the standard Node.js client already implements a connection pool. I do not know what mongo.mongoConnect in your code refers to; you said in the title that you are using mongoose, but it uses connect, not mongoConnect, to connect to the database. In general you should either be using the standard client or a JavaScript ODM library like mongoose. Either of them will take care of the connection management issues for you.
Refer to the documentation for the client/library you use for exactly the right way to use it. In general, you would initialize some kind of client object and store it globally before entering your main application handler. Then you would use this object to handle your database operations, and the object will transparently manage the underlying connections via the pool implementation. In this kind of setup, you would only close the connection when exiting the program, and usually the library takes care of that for you automatically, so you really never need to close the connection.
Thus, when using a MongoDB connection pool in NodeJS, you write your program basically the same way you would as if you just opened a connection at startup and then kept reusing it. The libraries take care of isolating you from all the problems that can arise from actually doing this. You do not need to, and in fact should not, close the connection after a database operation when using standard MongoDB NodeJS libraries.
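For illustration, a minimal sketch of this pattern with the official mongodb driver, assuming driver v4+ where maxPoolSize configures the pool (connection string, database, and collection names are placeholders):

const { MongoClient } = require('mongodb');

// one client for the whole app; the driver maintains a connection pool internally
const client = new MongoClient('mongodb://localhost:27017', { maxPoolSize: 10 });

async function main() {
  await client.connect();
  const users = client.db('mydb').collection('users');
  // each operation borrows a pooled connection and returns it when done
  await users.insertOne({ name: 'alice' });
  // no client.close() here; pooled connections are reused by later operations
}

main().catch(console.error);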
Note that other connection pool implementations exist that do require you to close the connection. With those pools, you reserve (or "check out" or "open") a connection, use it, perhaps for multiple operations, and then release (or "check in" or "close") the connection when you are done. This is probably what you were doing in PHP. It is important to read and follow the documentation for the connection pool library you are using to make sure you are using it correctly.
This may not be the exact answer you are looking for, but it is not a good idea to open a new connection for every request and then close it. It adds overhead because it takes time (even if only milliseconds) to create a new connection.
Instead, you should create a pool of connections and use it in your app.
It's a good idea to close your mongo connection when your process dies or is stopped, but you should not need to close your mongoose connection after every successful query.
If you are instantiating a new Mongo connection before each query, you shouldn't be doing that either. You only need to connect once, when booting up your server, as in the sketch below.
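A minimal sketch of that connect-once pattern with mongoose (the model, route, and connection string are illustrative):

const mongoose = require('mongoose');
const express = require('express');

const User = mongoose.model('User', new mongoose.Schema({ name: String }));
const app = express();
app.use(express.json());

// connect once at boot; mongoose pools and reuses connections internally
mongoose.connect('mongodb://localhost:27017/mydb')
  .then(() => app.listen(5000, () => console.log('Server started on port 5000')))
  .catch((err) => { console.error(err.stack); process.exit(1); });

app.post('/users', async (req, res) => {
  const user = await new User(req.body).save();
  res.json(user); // no disconnect here; the connection stays open for the next request
});

// close only on shutdown
process.on('SIGINT', async () => {
  await mongoose.disconnect();
  process.exit(0);
});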
You have two approaches:
1) reopen a connection on every call using middleware, or
2) queue your queries in Node and execute them all at once later.

Node clustering with websockets

I have a node cluster where the master responds to http requests.
The server also listens for websocket connections (via socket.io). A client connects to the server via said websocket. The client then chooses between various games (each game is handled by its own node process).
The questions I have are the following:
Should I open a new connection for each node process? How do I tell the client that it should connect to the exact node process X? (Because the server might handle incoming connection requests on its own.)
Is it possible to pass a socket to a node process, so that there is no need for opening a new connection?
What are the drawbacks if I just use one connection (in the master process) and pass the user messages to the respective node processes and the process messages back to the user? (I feel that it costs a lot of CPU to copy rather big objects when sending messages between the processes)
Is it possible to pass a socket to a node process, so that there is no need for opening a new connection?
You can send a plain TCP socket to another node process as described in the node.js doc here. The basic idea is this:
const child = require('child_process').fork('child.js');
child.send('socket', socket);
Then, in child.js, you would have this:
process.on('message', (m, socket) => {
  if (m === 'socket') {
    // you have a socket here
  }
});
The 'socket' message identifier can be any message name you choose; it is not special. node.js detects when the data you pass to child.send() is a socket and uses platform-specific interprocess communication to share that socket with the other process.
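For context, a minimal sketch of the sending side, loosely following the node.js docs (file name and port are placeholders):

const { fork } = require('child_process');
const net = require('net');

const child = fork('child.js');

// hand each raw TCP connection off to the child before any protocol state builds up
const server = net.createServer((socket) => {
  child.send('socket', socket);
});
server.listen(1337);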
But, I believe this only works for plain sockets that do not yet have any local state established other than the TCP state. I have not tried it with an established webSocket connection myself, but I assume it does not work for that, because once a webSocket has higher-level state associated with it beyond just the TCP socket (such as encryption keys), there's a problem: the OS will not automatically transfer that state to the new process.
Should I open a new connection for each node process? How do I tell the client that it should connect to the exact node process X? (Because the server might handle incoming connection requests on its own.)
This is probably the simplest means of getting a socket.io connection to the new process. If you make sure that your new process is listening on a unique port number and that it supports CORS, then you can just take the socket.io connection you already have between the master process and the client and send a message to the client on it that tells the client where to reconnect to (what port number). The client can then contain code to listen for that message and make a connection to that new destination.
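A sketch of that handoff; the 'reconnect-to' event name and port are hypothetical:

// master: tell the client which port its game process listens on
socket.emit('reconnect-to', { port: 3001 });

// client: listen for the redirect and connect to the new destination
socket.on('reconnect-to', ({ port }) => {
  socket.disconnect();
  const gameSocket = io('http://yourdomain.com:' + port);
  // use gameSocket for all further game traffic
});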
What are the drawbacks if I just use one connection (in the master process) and pass the user messages to the respective node processes and the process messages back to the user? (I feel that it costs a lot of CPU to copy rather big objects when sending messages between the processes)
The drawbacks are as you surmise. Your master process just has to spend CPU energy being the middle man forwarding packets both ways. Whether this extra work is significant to you depends entirely upon the context and has to be determined by measurement.
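For illustration, the middle-man pattern looks roughly like this (childForGame and handleGameMessage are hypothetical helpers):

// master: forward each user message to the child that owns the game
socket.on('game-message', (msg) => {
  const child = childForGame(msg.gameId); // hypothetical lookup of the game's process
  child.send({ socketId: socket.id, msg }); // the payload is serialized and copied across processes
});

// child: handle the message and reply back through the master
process.on('message', ({ socketId, msg }) => {
  const reply = handleGameMessage(msg); // hypothetical game logic
  process.send({ socketId, reply });
});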
Here's some more info I discovered. It appears that if an incoming socket.io connection that arrives on the master is immediately shipped off to a cluster child before the connection establishes its initial socket.io state, then this concept could work for socket.io connections too.
Here's an article on sending a connection to another server, with implementation code. This appears to be done immediately at connection time, so it should work for an incoming socket.io connection that is destined for a specific cluster child. The idea here is that there's sticky assignment to a specific cluster process, and all incoming connections of any kind that reach the master are immediately transferred over to the cluster child before they establish any state.

Does MongoDB automatically disconnect when the Node server closes

I just started using MongoDB. One of my confusions is this: I hear it is good to open your MongoDB connection on initialization and reuse that connection throughout your application.
However, should I ever explicitly close the MongoDB connection? Or does MongoDB implicitly close the connection when the Node server goes down?
Unless explicitly closed, the connection will be kept open in the event loop until the process terminates. So if you intend your app to maintain an open connection to MongoDB throughout its life cycle, there is no need to explicitly close it; it will happen automatically when the process is terminated.
Now, if you're writing a command-line script, you should close the connection explicitly, otherwise the open socket will keep your process from terminating.
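A minimal sketch of the script case, assuming the official mongodb driver (connection string and names are placeholders):

const { MongoClient } = require('mongodb');

async function run() {
  const client = new MongoClient('mongodb://localhost:27017');
  await client.connect();
  try {
    const count = await client.db('mydb').collection('users').countDocuments();
    console.log(count);
  } finally {
    // without this, the open socket keeps the Node process alive
    await client.close();
  }
}

run().catch(console.error);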

socket.io force a disconnect over XHR-polling

I have a client/server application using Node.js on the server and socket.io as the connection mechanism. For reasons relevant to my application, I want to have only one active connection per browser and reject all connections from other tabs that may be opened later during the session. This works great with WebSockets, but if WebSockets is not supported by the browser and XHR-polling is used instead, the disconnection never happens. So if the user just refreshes the page, this is not interpreted as a reconnection (I have a delay for reconnection and session restoring) but as a new tab, which ends in the connection being rejected because the old connection made by this same tab is still active.
I'm looking for a way to effectively end the connection from the client whenever a refresh occurs. I've tried binding to beforeunload and calling socket.disconnect() on the client side, and also sending a message like socket.emit('force-disconnect') and triggering the disconnect from the server, with no success. Am I missing something here? Appreciate your help!
I've read this question and couldn't find it useful for my particular case.
Solved the issue; it turns out it was a bug introduced in socket.io 0.9.5. If you have this issue, just update BOTH your server and client-side code to socket.io > 0.9.9 and set the client-side option 'sync disconnect on unload' to true, and you're all set.
Options are set this way:
var socket = io.connect('http://yourdomain.com', {'sync disconnect on unload': true});
You can also get "Error: xhr poll error" if you run out of available file descriptors. This is likely to happen during a load test.
Check the current open file descriptor limit:
ulimit -n
Increase it to a high number:
ulimit -n 1000000
