nodejs oracle-db multiple DB connection

I want to create two database connections and periodically check their status. If database 1 fails, I want to switch over to database 2. Can you give me some pointers, please?

The first question is what exactly you are trying to do during the 'failover'. Is the Node.js app doing some work for users, or is it just a monitoring script? Next: how are your DBs configured (I'm guessing it's not RAC)?
There are all sorts of 'high availability' options and levels in Oracle, many of which are transparent to the application and are available when you use a connection pool. A few are described in the node-oracledb documentation; look at Connections and High Availability. Other things, such as retrying failed connection requests, can be configured in a tnsnames.ora file.
At the most basic level, the answer to your question is that you could periodically check whether a query works, or just use connection.ping(). If you go this 'roll your own' route, use a connection pool of size 1 for each DB, then periodically get the connection from the pool and use it.
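A minimal sketch of that 'roll your own' approach with node-oracledb; the connection settings, variable names, and the 10-second interval are assumptions for illustration only:

const oracledb = require('oracledb');

// Hypothetical connection settings for the two databases:
const dbConfig1 = { user: 'app', password: 'secret', connectString: 'dbhost1/XEPDB1' };
const dbConfig2 = { user: 'app', password: 'secret', connectString: 'dbhost2/XEPDB1' };

let pool1, pool2, activePool;

async function init() {
  // One single-connection pool per database.
  pool1 = await oracledb.createPool({ ...dbConfig1, poolMin: 1, poolMax: 1 });
  pool2 = await oracledb.createPool({ ...dbConfig2, poolMin: 1, poolMax: 1 });
  activePool = pool1;
}

async function isAlive(pool) {
  let conn;
  try {
    conn = await pool.getConnection();
    await conn.ping(); // lightweight round trip to the database
    return true;
  } catch (err) {
    return false;
  } finally {
    if (conn) await conn.close(); // release back to the pool
  }
}

// Every 10 seconds, fail over to database 2 if database 1 is down.
setInterval(async () => {
  activePool = (await isAlive(pool1)) ? pool1 : pool2;
}, 10000);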
If you update your question with details, it would be easier to answer.

Related

TypeORM limit number o created connections with createConnection(...)

I need help limiting the number of connections TypeORM can hold in its connectionManager.
Today I have many databases, more than 12 thousand, distributed across several servers, and each request in my application can connect to a different database because each database is related to a user. So for each user requesting something from my API, my service runs createConnection(userParams), but I don't know how to control these connections.
I tried limiting it inside the userParams with something like
createConnection({ ...userParams, extra: { connectionLimit: 5 } })
but it seems this only limits the inner pool that is created each time. I need a way to limit the total number of connections the connectionManager can hold.
Basically I want a global pool instead of one for each connection created. Can someone please give me any hints?
It looks like what I wanted to achieve was not possible before TypeORM version 0.3.6. On current versions the connectionManager no longer exists, so I'm able to control the connections myself.
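For illustration, a minimal sketch of controlling the connections yourself, assuming TypeORM 0.3.x and its DataSource API; MAX_OPEN, getTenantDataSource, and the least-recently-used eviction policy are all hypothetical:

const { DataSource } = require('typeorm');

const MAX_OPEN = 50;    // hypothetical global cap on open connections
const open = new Map(); // tenantId -> initialized DataSource

async function getTenantDataSource(tenantId, options) {
  const cached = open.get(tenantId);
  if (cached) {
    open.delete(tenantId); // re-insert to mark as most recently used
    open.set(tenantId, cached);
    return cached;
  }
  if (open.size >= MAX_OPEN) {
    // Evict the least recently used tenant's connection.
    const [oldId, oldDs] = open.entries().next().value;
    await oldDs.destroy();
    open.delete(oldId);
  }
  const ds = await new DataSource(options).initialize();
  open.set(tenantId, ds);
  return ds;
}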

How many sessions will be created using a single pool?

I am using Knex version 0.21.15 from npm. My pooling parameter is pool: {min: 3, max: 300}.
Oracle is my data base server.
Is pool the pool count or the session count?
If it is a pool, how many sessions can be created using a single pool?
If I run one non-transactional query 10 times using a knex connection, how many sessions will be created?
And when will the created sessions be cleared from the Oracle session list?
Is there any parameter available to remove idle sessions from Oracle?
Please suggest if there are any.
WARNING: a pool.max value of 300 is far too large. You really don't want the database administrator running your Oracle server to distrust you: that can make your work life much more difficult. And such a large max pool size can bring the Oracle server to its knees.
It's a paradox: often you can get better throughput from a database application by reducing the pool size. That's because many concurrent queries can clog the database system.
The pool object here governs how many connections may be in the pool at once. Each connection is a so-called serially reusable resource. That is, when some part of your nodejs program needs to run a query or series of queries, it grabs a connection from the pool. If no connection is already available in the pool, the pooling stuff in knex opens a new one.
If the number of open connections is already at the pool.max value, the pooling stuff makes that part of your nodejs program wait until some other part of the program finishes using a connection in the pool.
When your part of the nodejs program finishes its queries, it releases the connection back to the pool to be reused when some other part of the program needs it.
This is almost absurdly complex. Why bother? Because it's expensive to open connections and much cheaper to re-use them.
Now to your questions:
Is pool the pool count or the session count?
It is a pair of limits (min / max) on the count of connections (sessions) open within the pool at one time.
If it is a pool, how many sessions can be created using a single pool?
Up to the pool.max value.
If I run one non-transactional query 10 times using a knex connection, how many sessions will be created?
It depends on concurrency. If your tenth query starts before the first one completes, you may use ten connections from the pool. But you will most likely use fewer than that.
And when will the created sessions be cleared from the Oracle session list?
As mentioned, the pool keeps up to pool.max connections open. That's why 300 is too many.
Is there any parameter available to remove idle sessions from Oracle?
This operation is called "evicting" connections from the pool. knex does not support this. Oracle itself may drop idle connections after a timeout. Ask your DBA about that.
In the meantime, use the knex defaults of pool: {min: 2, max: 10} unless and until you really understand pooling and the required concurrency of your application. max:300 would only be justified under very special circumstances.
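For reference, a minimal knex setup with the default-sized pool recommended above; the Oracle connection details are hypothetical:

const knex = require('knex')({
  client: 'oracledb',
  connection: {
    user: 'app',
    password: 'secret',
    connectString: 'dbhost/XEPDB1',
  },
  pool: { min: 2, max: 10 }, // the knex defaults; raise only with evidence
});

// Each query transparently acquires a connection from the pool and
// releases it back when the query completes.
knex.raw('select 1 from dual').then(() => knex.destroy());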

Multi-tenant MongoDB + mongo-native driver + connection pooling

We are trying to implement the strategy outlined in the following presentation (slides 13-18) using nodejs/mongo-native driver.
https://www.slideshare.net/mongodb/securing-mongodb-to-serve-an-awsbased-multitenant-securityfanatic-saas-application
In summary:
Create a connection pool to mongodb from node.js.
For every request from a tenant, get a connection from the pool and "authenticate" it. Use the authenticated connection to serve the request. After the response, return the connection to the pool.
I'm able to create a connection pool to MongoDB without specifying any database, using the mongo-native driver like so:
const client = new MongoClient('mongodb://localhost:27017', { useNewUrlParser: true, poolSize: 10 });
However, in order to get a db object, I need to do the following:
const db = client.db(dbName);
This is where I would like to authenticate the connection, but AFAICS this functionality has been deprecated/removed from the more recent mongo drivers, both Node.js and Java.
Going by the presentation, looks like this was possible to do with older versions of the Java driver.
Is it even possible for me to use a single connection pool and authenticate tenants to individual databases using the same connections?
The alternative we have is to have a connection pool per tenant, which is not attractive to us at this time.
Any help will be appreciated, including reasons why this feature was deprecated/removed.
It's me from the slides!! :) I remember that session, it was fun.
Yeah, that doesn't work any more; they killed this magnificent feature about 6 months after we implemented it, and we were out in beta with it at the time. We had to change the way we work.
It's a shame, since to this day, in Mongo, "connection" (network stuff, SSL, cluster identification) and authentication are 2 separate actions.
Think about when you run the mongo shell: you provide the host, port, and replica set if any, and you're in, connected! But not authenticated. You can then authenticate as user1, do stuff, then authenticate as user2 and do stuff only user2 can do. And this is done on the same connection! Without going through the overhead of creating the channel again, the SSL handshake, and so on...
Back then, the driver let us have a connection pool of "blank" connections that we could authenticate at will to the current tenant in the context of the current execution thread.
Then they deprecated this capability; I think it was with Mongo 2.4. Now they only support connections that are authenticated at creation. We asked enterprise support; they didn't say why, but to me it looked like they found this way was not secure: an "old" authentication may leak and linger on that not-so-blank reusable connection.
We changed our multi-tenancy infrastructure implementation from one large pool of blank connections to many small pools of authenticated connections, one pool per tenant. These per-tenant pools can be extremely small, like 3 or 5 connections. This solution scaled nicely to several hundred tenants, but to reach thousands of tenants we had to make all kinds of optimizations: create pools as needed, close them after idle time, lazy creation for non-active or dormant tenants, etc. This allowed us to scale even more... We're still looking into solutions and optimizations.
You could always go back to a global pool of connections authenticated as a Mongo user that has access to multiple databases. Yes, you can switch databases on that same authenticated connection. You just can't switch authentication...
This is an example with the pure Mongo Java driver; we used Spring, which provides similar functionality:
MongoClient mongoClient = new MongoClient();
DB cust1db = mongoClient.getDB("cust1");
cust1db.get...
DB cust2db = mongoClient.getDB("cust2");
cust2db.get...
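In Node.js, the equivalent with the mongo-native driver would be one authenticated pool switching databases per tenant. A minimal sketch, using the 3.x-era options from the question; the credentials, database names, and collection name are made up:

const { MongoClient } = require('mongodb');

async function main() {
  // One authenticated pool shared by all tenants:
  const client = new MongoClient('mongodb://appUser:secret@localhost:27017', {
    useNewUrlParser: true,
    poolSize: 10,
  });
  await client.connect();

  // Switch databases, not authentication, per tenant:
  const cust1db = client.db('cust1');
  const cust2db = client.db('cust2');
  await cust1db.collection('orders').findOne({});
  await cust2db.collection('orders').findOne({});

  await client.close();
}

main().catch(console.error);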
Somewhat related: I would recommend looking at MongoDB encryption at rest. It's an enterprise feature, and the only way to encrypt each database (each customer) with a different key.

PostgreSQL: use same connection or get another from pool?

I have a Node.js script and a PostgreSQL database, and I'll be using a library that maintains a pool of connections to the database.
Say I have a script that queries the database multiple times (not in a transaction) at different points in the script. How do I tell whether I should acquire a single connection/client and reuse it throughout*, or acquire a new client from the pool for each query? (Both work, but which has better performance?)
*task in the pg-promise library, connect in the node-postgres library.
...
// Acquire connection from pool.
(Database query)
(Non-database-related code)
(Database query)
// Release connection to pool.
...
or
...
// Acquire connection from pool.
(Database query)
// Release connection to pool.
(Non-database-related code)
// Acquire connection from pool.
(Database query)
// Release connection to pool.
...
I am not sure how the pool you are using works, but normally pools reuse connections (they don't disconnect after use), so you do not need to be concerned with caching connections.
You can use the node-postgres module, which will make your task easier.
And about your question of when to use a pool, here is the brief answer:
PostgreSQL server can only handle 1 query at a time per connection. That means if you have 1 global new pg.Client() connected to your backend, your entire app is bottlenecked based on how fast postgres can respond to queries. It literally will line everything up, queueing each query. Yeah, it's async and so that's alright... but wouldn't you rather multiply your throughput by 10x? Use pg.connect and set pg.defaults.poolSize to something sane (we do 25-100, not sure the right number yet).
new pg.Client is for when you know what you're doing: when you need a single long-lived client for some reason, or need to very carefully control the life-cycle. A good example of this is when using LISTEN/NOTIFY. The listening client needs to be around and connected, and not shared, so it can properly handle NOTIFY messages. Another example would be when opening up a one-off client to kill some hung stuff, or in command-line scripts.
Here is the link to that module:
https://github.com/brianc/node-postgres
You can see the documentation there, including the section about pooling. Hopefully this will help. Thanks :)
And for handing a client back to the pool, it provides the done callback, which you call when you have finished with the connection.
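For completeness, the first pattern from the question maps onto pg-promise's task, which holds one connection for the whole block. A minimal sketch; the connection string and queries are made up for illustration:

const pgp = require('pg-promise')();
const db = pgp('postgres://user:secret@localhost:5432/mydb');

db.task(async t => {
  // Both queries reuse the single connection allocated for this task:
  const user = await t.one('SELECT * FROM users WHERE id = $1', [123]);
  // ... non-database-related code can run here while the connection is held ...
  return t.any('SELECT * FROM orders WHERE user_id = $1', [user.id]);
})
  .then(orders => console.log(orders))
  .catch(console.error);
// The connection goes back to the pool when the task settles.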

Connection pool using pg-promise

I'm using Node.js and PostgreSQL and trying to implement connections as efficiently as possible.
I saw that pg-promise is built on top of node-postgres and node-postgres uses pg-pool to manage pooling.
I also read that "more than 100 clients at a time is a very bad thing" (node-postgres).
I'm using pg-promise and wanted to know:
What is the recommended poolSize for a very big load of data?
What happens if poolSize = 100 and the application gets 101 requests simultaneously (or even more)?
Does Postgres handle the ordering and make the 101st request wait until it can run?
I'm the author of pg-promise.
I'm using Node.js and PostgreSQL and trying to implement connections as efficiently as possible.
There are several levels of optimization for database communications. The most important of them is to minimize the number of queries per HTTP request, because IO is expensive, and so is acquiring a connection from the pool.
If you have to execute more than one query per HTTP request, always use tasks, via method task.
If your task requires a transaction, execute it as a transaction, via method tx.
If you need to do multiple inserts or updates, always use multi-row operations. See Multi-row insert with pg-promise and PostgreSQL multi-row updates in Node.js.
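As a sketch of the multi-row point, using pg-promise's helpers namespace; the connection string, table, and columns are hypothetical:

const pgp = require('pg-promise')();
const db = pgp('postgres://user:secret@localhost:5432/mydb');

// Build one INSERT statement for all rows instead of one query per row:
const cs = new pgp.helpers.ColumnSet(['name', 'email'], { table: 'users' });
const rows = [
  { name: 'Alice', email: 'alice@example.com' },
  { name: 'Bob', email: 'bob@example.com' },
];
const insert = pgp.helpers.insert(rows, cs);

db.none(insert)
  .then(() => console.log('inserted'))
  .catch(console.error);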
I saw that pg-promise is built on top of node-postgres and node-postgres uses pg-pool to manage pooling.
node-postgres started using pg-pool from version 6.x, while pg-promise remains on version 5.x, which uses the internal connection pool implementation. Here's the reason why.
I also read that "more than 100 clients at a time is a very bad thing"
My long practice in this area suggests: if you cannot fit your service into a pool of 20 connections, you will not be saved by going for more connections; you will need to fix your implementation instead. Also, by going over 20 you start putting additional strain on the CPU, and that translates into further slow-down.
What is the recommended poolSize for a very big load of data?
The size of the data has nothing to do with the size of the pool. You typically use just one connection for a single download or upload, no matter how large. If your implementation ends up using more than one connection for it, that is a bug you need to fix if you want your app to be scalable.
What happens if poolSize = 100 and the application gets 101 requests simultaneously?
It will wait for the next available connection.
See also:
Chaining Queries
Performance Boost
What happens if poolSize = 100 and the application gets 101 requests simultaneously (or even more)? Does Postgres handle the ordering and make the 101st request wait until it can run?
Right, the request will be queued. But it's not handled by Postgres itself; it's handled by your app (pg-pool). Whenever you run out of free connections, the app waits for a connection to be released, and then the next pending request is performed. That's what pools are for.
What is the recommended poolSize for a very big load of data?
It really depends on many factors, and no one can tell you the exact number. Why not test your app under a huge load, see how it performs in practice, and find the bottlenecks?
Also, I find the node-postgres documentation quite confusing and misleading on this matter:
Once you get >100 simultaneous requests your web server will attempt to open 100 connections to the PostgreSQL backend and 💥 you'll run out of memory on the PostgreSQL server, your database will become unresponsive, your app will seem to hang, and everything will break. Boooo!
https://github.com/brianc/node-postgres
It's not quite true. If you reach the connection limit on the Postgres side, you simply won't be able to establish a new connection until a previous connection is closed. Nothing will break if you handle this situation in your node app.
