Similar to tools like Retool, I'm looking to create a tool where developers can connect their own databases and run queries.
At the moment, to connect and query the respective database, I do the following:
Establish a connection to the DB using the saved credentials
Query
Destroy the connection
This is unfortunately slow and takes about 1.5-2s to finish.
An alternative is to keep the connection in memory and reuse it when needed.
Thoughts on how to solve this?
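A minimal sketch of the keep-it-in-memory idea, assuming node-postgres (pg) as the driver; getPool and runQuery are illustrative names, not part of any existing API:

const { Pool } = require('pg');

// Cache one pool per saved credential set, keyed by a connection id.
const pools = new Map();

function getPool(connId, credentials) {
  if (!pools.has(connId)) {
    // The pool opens connections lazily and reuses them across queries.
    pools.set(connId, new Pool({ ...credentials, max: 5, idleTimeoutMillis: 30000 }));
  }
  return pools.get(connId);
}

async function runQuery(connId, credentials, sql, params) {
  const pool = getPool(connId, credentials);
  const result = await pool.query(sql, params); // no per-request connect/disconnect
  return result.rows;
}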
I've gone through plenty of articles and the official TypeORM documentation on setting up connection pooling with TypeORM and PostgreSQL, but couldn't find a solution.
All the articles I've seen so far explain adding the max/poolSize attribute to the ORM configuration for connection pooling, but this does not set up a pool of idle connections in the database.
When I check the pg_stat_activity table after the application bootstraps, I cannot see any idle connections in the DB, but when a request is sent to the application I can see an active connection to the DB.
The max/poolSize attribute defined under extra in the ORM configuration merely acts as the maximum number of connections that can be opened from the application to the DB concurrently.
What I'm expecting is that during bootstrap the application opens a predefined number of connections to the database and keeps them idle. When a request comes into the application, one of the idle connections is picked up and the request is served.
Can anyone share insights on how to define this configuration with TypeORM and PostgreSQL?
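For reference, this is roughly where that setting lives. A minimal sketch of the configuration, assuming TypeORM 0.3's DataSource API with the postgres driver (all values are placeholders):

const { DataSource } = require('typeorm');

const dataSource = new DataSource({
  type: 'postgres',
  host: 'localhost',
  port: 5432,
  username: 'app_user',
  password: 'app_password',
  database: 'app_db',
  // "extra" is passed straight through to node-postgres.
  // max caps the pool size; it does not pre-open idle connections.
  extra: {
    max: 10,
    idleTimeoutMillis: 30000,
  },
});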
TypeORM uses node-postgres, which has a built-in pg-pool and, as far as I can tell, doesn't have that kind of option. It supports a max, and as your app needs more connections it will create them. So if you want to pre-warm the pool, or maybe load/stress test it and see those additional connections, you'll need to write some code that kicks off a bunch of async queries/inserts.
I think I understand what you're looking for, as I used to do enterprise Java, and connection pools in things like GlassFish and JBoss have more options that let you keep hot, unused connections in the pool. There are no such options in TypeORM/node-postgres, though.
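To illustrate the pre-warming idea, here is a minimal sketch, assuming a TypeORM DataSource named dataSource has already been initialized; it simply fires a batch of trivial queries concurrently so pg-pool opens that many connections:

// Kick off N concurrent queries so the pool grows to roughly N connections.
async function preWarmPool(dataSource, size = 10) {
  const warmers = Array.from({ length: size }, () => dataSource.query('SELECT 1'));
  await Promise.all(warmers);
  // The connections now sit idle in the pool until idleTimeoutMillis evicts them.
}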
I have the following code in my app.js which runs on server start (npm start)
mongo.mongoConnect('connection_string', 'users')
  .then((x) => {
    console.log('Database connection successful');
    // Only start accepting requests once the DB connection is up.
    app.listen(5000, () => console.log('Server started on port 5000'));
  })
  .catch(err => {
    console.error(err.stack);
    process.exit(1);
  });
process.on('SIGINT', mongo.mongoDisconnect).on('SIGTERM', mongo.mongoDisconnect);
As you can see, I hook SIGINT and SIGTERM to close my connections when the process exits.
I've been reading a lot about how to deal with database connections in Mongo, and I know that I should open the connection just once and share it across my application.
Does that mean that even after the save() method, when saving data to Mongo following a POST request, I should not be closing my connection? If I close it, how am I going to open it again, since the connection happens on app start?
I'm asking since in PHP I was in the habit of always opening and closing my connection around each MySQL query.
Likewise, does it mean the connection will close only on server shutdown? In other words, will it always be present, since I do not want to shut down my Node.js backend instance?
It is formally correct to open a connection, run a query, and then close the connection, but it is not a good practice, because opening a connection is an "expensive" operation and connections can be reused, which is much more efficient. The main restriction on an open connection is that it can only be used by 1 thread at a time. (More accurately, once a request is sent on a connection, no other requests can be sent on that connection until the response to that request is received.)
If your application is short lived or inherently single threaded, as may be the case when running as a "serverless" function, it may be acceptable to open and close a connection on each request.
While in theory it might be acceptable to open a single connection at the start of the program, keep a global reference to that connection, and reuse it, in practice there are common ways in which a connection becomes unusable that you would have to account for, and handling all the possibilities requires complex code. It gets even more complicated when, as is possible with MongoDB replica sets, you are actually connecting to more than one server and want to retry a command on a second server if the first one fails to respond.
That is why the standard and "best" practice is to use a "connection pool" to manage your database connections. A pool opens a set of network connections to the database, verifies and maintains their health, and dynamically assigns virtual database connections to actual network connections as needed. The pool is implemented in a library that will have received a lot of real-world testing and is extremely likely to be better than anything you would write yourself. Connection pools have configuration options that would let you set any behavior you want, including opening a new connection for each request and closing it when done, but they offer a wide range of performance-enhancing capabilities, such as reusing connections and avoiding the overhead of creating them for each request.
This is why for MongoDB, the standard Node.js client already implements a connection pool. I do not know what mongo.mongoConnect in your code refers to; you said in the title that you are using mongoose but it uses connect, not mongoConnect to connect to the database. In general you should either be using the standard client or a JavaScript ORM library like mongoose. Either of them will take care of the connection management issues for you.
Refer to the documentation for the client/library you use for exactly the right way to use it. In general, you would initialize some kind of client object and store it globally before entering your main application handler. Then you would use this object to handle your database operations, and the object will transparently manage the underlying connections via the pool implementation. In this kind of setup, you would only close the connection when exiting the program, and usually the library takes care of that for you automatically, so you really never need to close the connection.
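For example, here is a minimal sketch using the official MongoDB Node.js driver (4.x or later); startServer is an illustrative name, 'connection_string' is a placeholder, and app is the same Express app as in your code. The client is created once, its built-in pool manages the connections, and it is only closed on shutdown:

const { MongoClient } = require('mongodb');

// Created once at startup; the driver manages an internal connection pool.
const client = new MongoClient('connection_string', { maxPoolSize: 10 });

async function startServer() {
  await client.connect();
  const users = client.db().collection('users');

  app.post('/users', async (req, res) => {
    // Reuse the same client for every request; no per-request connect/close.
    await users.insertOne(req.body);
    res.sendStatus(201);
  });

  app.listen(5000, () => console.log('Server started on port 5000'));
}

process.on('SIGINT', () => client.close().then(() => process.exit(0)));

startServer();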
Thus, when using a MongoDB connection pool in NodeJS, you write your program basically the same way you would as if you just opened a connection at startup and then kept reusing it. The libraries take care of isolating you from all the problems that can arise from actually doing this. You do not need to, and in fact should not, close the connection after a database operation when using standard MongoDB NodeJS libraries.
Note that other connection pool implementations exist that do require you to close the connection. What you do with those pools is reserve (or "check out" or "open") a connection, use it, perhaps for multiple operations, and the release (or "check in" or "close") the connection when you are done. This is probably what you were doing in PHP. It is important to read and follow the documentation for the connection pool library you are using to make sure you are using it correctly.
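For contrast, here is a minimal sketch of that reserve/use/release style using node-postgres, whose pool does expose explicit check-out and check-in; this is only to illustrate the pattern, not something you need with the MongoDB client:

const { Pool } = require('pg');
const pool = new Pool({ max: 10 });

async function countUsers() {
  const client = await pool.connect();   // check a connection out of the pool
  try {
    const res = await client.query('SELECT COUNT(*) FROM users');
    return res.rows[0].count;
  } finally {
    client.release();                    // check it back in; do not close it
  }
}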
This may not be the exact answer you are looking for, but it is not a good idea to open a new connection for every request and then close it. It adds overhead because it takes some time (even if only milliseconds) to create a new connection.
Instead, you should create a pool of connections and use it in your app.
It's a good idea to close your mongo connection when your process dies or is stopped, but you should not need to close your mongoose connection after every successful query.
If you are instantiating a new mongo connection before each query, you shouldn't need to do that either. You should only need to do it once when booting up your server.
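For example, with Mongoose (which your title mentions), a minimal sketch, assuming Mongoose 6+ where maxPoolSize controls the driver's pool size:

const mongoose = require('mongoose');

// Connect once at boot; Mongoose keeps an internal pool of connections.
async function boot() {
  await mongoose.connect('connection_string', { maxPoolSize: 10 });
  app.listen(5000, () => console.log('Server started on port 5000'));
}

// Close only when the process is shutting down.
process.on('SIGINT', async () => {
  await mongoose.connection.close();
  process.exit(0);
});

boot();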
You have two approaches:
1) Reopen a connection on every call, using middleware.
2) Save your queries in Node and execute them all at once later.
In my application I have a default database and other databases I have to connect to depending on the client's request. As far as I understand, with Mongoose in Node there is an application-wide pool of connections: if I change the database, it is changed for all subsequent requests, which I think could cause some problems. What is the best way to switch databases with Mongoose?
Mongoose 3.7.1 (unstable) supports switching databases.
Otherwise you'll need to create separate connection instances for each database.
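A minimal sketch of the separate-connection approach (the URIs and model names are illustrative); each createConnection gets its own pool, so switching databases for one request does not affect the others:

const mongoose = require('mongoose');

// One independent connection (and pool) per database.
const defaultDb = mongoose.createConnection('mongodb://localhost/default_db');
const clientDb = mongoose.createConnection('mongodb://localhost/client_db');

const userSchema = new mongoose.Schema({ name: String });

// Register the model on each connection that needs it.
const DefaultUser = defaultDb.model('User', userSchema);
const ClientUser = clientDb.model('User', userSchema);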
I'm using socket.io to handle realtime connections on my site. When a client connects, though, I save the connection to a MySQL database along with other details like which URL they are on. I'm using the socket.id as the primary key for that MySQL table, as I assumed it would be unique. The purpose of the MySQL database is to make it easier to perform sorting later on. If there's a better (in-Node) way to do that, I'm all ears.
When the socket disconnects, I remove the row from the MySQL table.
However, I seem to be running into some asynchronous issues here that I can't figure out a workaround for. Since the INSERT into MySQL is asynchronous, it seems possible that a user may disconnect BEFORE MySQL has inserted the row, at which point the internals of socket.io and the MySQL database will be out of sync.
How would you go about keeping the two in sync? Is it worth BLOCKING on the MySQL INSERT? Can Node even do that? Or is there a different approach?
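One common approach (a sketch, not from the original post; it assumes mysql2/promise, a hypothetical connections(id, url) table, and io being your socket.io server) is to keep the promise returned by the INSERT on the socket and have the disconnect handler wait for it before deleting, so the delete can never overtake the insert:

const mysql = require('mysql2/promise');
const pool = mysql.createPool({ host: 'localhost', user: 'app', database: 'realtime' });

io.on('connection', (socket) => {
  // Remember the INSERT's promise so the disconnect handler can wait for it.
  const inserted = pool.query(
    'INSERT INTO connections (id, url) VALUES (?, ?)',
    [socket.id, socket.handshake.headers.referer]
  );

  socket.on('disconnect', async () => {
    await inserted.catch(() => {}); // never delete before the insert has settled
    await pool.query('DELETE FROM connections WHERE id = ?', [socket.id]);
  });
});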