node oracledb connection pool refresh - node.js

I maintain a connection pool to an Oracle database (cluster) using Node.js and the 'oracledb' module.
From the database side, it seems the pool connections are not balanced between the two DBs in the cluster.
Any suggestion on how to fix that?
Also, do I need to refresh the pool from time to time, or can I just leave the pool there forever?
For point 2, I have 'manually' refreshed the pool by creating a new duplicate pool and killing the old one every 'x' minutes. It turns out I encounter an error: ORA-12549: TNS:operating system resource quota exceeded. May I know why?
I'd appreciate it if you could clear up my confusion, thanks.
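The rotation described above can be sketched as below. This is a minimal simulation with stub pools standing in for real node-oracledb pools (with the real driver you would call oracledb.createPool() and pool.close(drainTime)). ORA-12549 may indicate the OS ran out of process/session resources, which can happen if old pools are abandoned without being fully closed, so the key point in the sketch is that the replacement is opened first and the old pool is then explicitly closed.

```javascript
// Stub pool standing in for a real node-oracledb pool.
function createStubPool(id) {
  return { id, open: true, close() { this.open = false; } };
}

let activePool = createStubPool(0);
let nextId = 1;
const retired = [];

function rotatePool() {
  const oldPool = activePool;
  activePool = createStubPool(nextId++); // open the replacement first
  oldPool.close();                       // then drain and close the old one
  retired.push(oldPool);
  return activePool;
}
```

With the real driver, pass a drain time, e.g. pool.close(10), so in-flight connections can finish before the pool's resources are freed.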

Related

TypeORM: limit the number of created connections with createConnection(...)

I need help limiting the number of connections TypeORM can hold in its connectionManager.
Today I have many databases, more than 12 thousand, distributed across several servers. Each request in my application can connect to a different database, because each database belongs to a user. So for each user requesting something from my API, my service runs createConnection(userParams), but I don't know how to control these connections.
I tried limiting it inside the userParams, something like
createConnection({ ...userParams, extra: { connectionLimit: 5 } })
but it seems this only limits the inner pool that is created each time. I need a way to limit the total number of connections the connectionManager can hold.
Basically, I want a global pool instead of one for each connection created. Can someone please give me any hints?
It looks like what I wanted to achieve was not possible before TypeORM version 0.3.6. On current versions the connectionManager no longer exists, so I'm able to control connections myself.
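One way to impose a global cap is to guard every connection-opening call with a shared semaphore, sketched below. Everything here is a hypothetical stand-in (openConnection for createConnection(userParams), a limit of 5); it illustrates the idea, it is not TypeORM API.

```javascript
// A minimal async semaphore: at most `max` holders at once.
class Semaphore {
  constructor(max) {
    this.max = max;
    this.inUse = 0;
    this.waiters = [];
  }
  async acquire() {
    if (this.inUse < this.max) {
      this.inUse++;
      return;
    }
    // wait until a releaser hands us its slot
    await new Promise(resolve => this.waiters.push(resolve));
  }
  release() {
    const next = this.waiters.shift();
    if (next) next();   // hand the slot directly to a waiter
    else this.inUse--;
  }
}

const dbLimit = new Semaphore(5); // global cap across every per-user database

async function withUserDb(openConnection, work) {
  await dbLimit.acquire();
  try {
    const conn = await openConnection(); // stands in for createConnection(userParams)
    try {
      return await work(conn);
    } finally {
      await conn.close();
    }
  } finally {
    dbLimit.release();
  }
}
```

Every request goes through withUserDb, so no matter how many different databases are involved, at most 5 connections exist at once process-wide.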

How many sessions will be created using a single pool?

I am using Knex version 0.21.15 (npm). My pooling parameter is pool: {min: 3, max: 300}.
Oracle is my database server.
Is pool a pool count or a session count?
If it is a pool, how many sessions can be created using a single pool?
If I run one non-transactional query 10 times using a knex connection, how many sessions will be created?
And when will the created sessions be cleared from Oracle?
Is there any parameter available to remove idle sessions from Oracle?
Please suggest if there is any.
WARNING: a pool.max value of 300 is far too large. You really don't want the database administrator running your Oracle server to distrust you: that can make your work life much more difficult. And such a large max pool size can bring the Oracle server to its knees.
It's a paradox: often you can get better throughput from a database application by reducing the pool size. That's because many concurrent queries can clog the database system.
The pool object here governs how many connections may be in the pool at once. Each connection is a so-called serially reusable resource. That is, when some part of your nodejs program needs to run a query or series of queries, it grabs a connection from the pool. If no connection is already available in the pool, the pooling stuff in knex opens a new one.
If the number of open connections is already at the pool.max value, the pooling stuff makes that part of your nodejs program wait until some other part of the program finishes using a connection in the pool.
When your part of the nodejs program finishes its queries, it releases the connection back to the pool to be reused when some other part of the program needs it.
This is almost absurdly complex. Why bother? Because it's expensive to open connections and much cheaper to re-use them.
Now to your questions:
Is pool a pool count or a session count?
It is a pair of limits (min / max) on the count of connections (sessions) open within the pool at one time.
If it is a pool, how many sessions can be created using a single pool?
Up to the pool.max value.
If I run one non-transactional query 10 times using a knex connection, how many sessions will be created?
It depends on concurrency. If your tenth query starts before the first one completes, you may use ten connections from the pool. But you will most likely use fewer than that.
And when will the created sessions be cleared from Oracle?
As mentioned, the pool keeps up to pool.max connections open. That's why 300 is too many.
Is there any parameter available to remove idle sessions from Oracle?
This operation is called "evicting" connections from the pool. knex's underlying pool (tarn.js, used since knex 0.18) can do this: set pool.idleTimeoutMillis and idle connections above pool.min are reaped after that timeout. Oracle itself may also drop idle sessions after a profile-defined timeout; ask your DBA about that.
In the meantime, use the knex defaults of pool: {min: 2, max: 10} unless and until you really understand pooling and the required concurrency of your application. max:300 would only be justified under very special circumstances.
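A conservative configuration along those lines might look like this. The connection details are placeholders, and idleTimeoutMillis is a tarn.js pool option that knex versions from 0.18 on pass through, so check it against your knex version:

```javascript
// Conservative knex setup (sketch; connection details are placeholders).
const knexConfig = {
  client: 'oracledb',
  connection: {
    user: 'app_user',                  // placeholder
    password: 'app_password',          // placeholder
    connectString: 'dbhost/service_name'
  },
  pool: {
    min: 2,                  // a couple of warm connections
    max: 10,                 // hard cap on sessions this process will open
    idleTimeoutMillis: 30000 // reap connections idle for 30s (tarn.js option)
  }
};
```

With this, even ten concurrent queries never open more than 10 sessions, and idle sessions above min are released back to the server over time.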

nodejs oracle-db multiple DB connection

I wanted to create two database connections and periodically check the status of each. If database 1 fails, I want to switch the connection to database 2. Can you give me some pointers please?
The first question is what exactly you are trying to do during the 'failover'. Is the Node.js app doing some work for users, or is it just a monitoring script? Next, how are your DBs configured (I'm guessing it's not RAC)?
There are all sorts of 'high availability' options and levels in Oracle, many of which are transparent to the application and are available when you use a connection pool. A few are described in the node-oracledb doc - look at Connections and High Availability. Other things can be configured in a tnsnames.ora file such as connection retries if connection requests fail.
At the most basic level, the answer to your question is that you could periodically check whether a query works, or just use connection.ping(). If you go this 'roll your own' route, use a connection pool with size 1 for each DB. Then periodically get the connection from the pool and use it.
If you update your question with details, it would be easier to answer.
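The 'roll your own' check can be sketched like this. The stub ping here stands in for the real node-oracledb sequence of pool.getConnection(), connection.ping(), connection.close():

```javascript
// Failover sketch: prefer the first database, fall back to the next
// when a health check fails.
async function pickHealthyPool(pools) {
  for (const pool of pools) {
    try {
      await pool.ping(); // stub for getConnection() + connection.ping() + close()
      return pool;
    } catch (err) {
      // this DB is unreachable; try the next one
    }
  }
  throw new Error('no database reachable');
}
```

Run pickHealthyPool on a timer (or before each batch of work) and route queries through whichever pool it returns.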

Number of active connections on the server reached to max

I am working with mongodb and nodejs. I have mongodb hosted on Atlas.
My backend had been working perfectly, but now it sometimes gets stuck, and when I look at the analytics on MongoDB Atlas it shows the number of active connections has reached the maximum of 100.
Can someone please explain why this is happening? Can I reboot the connections and make it 0?
@Stennie I have used mongoose to connect to the database.
Here is my configuration file
const mongooseOptions = {
  useNewUrlParser: true,
  autoReconnect: true,
  poolSize: 25,
  connectTimeoutMS: 30000,
  socketTimeoutMS: 30000
}

exports.register = (server, options, next) => {
  defaults = Hoek.applyToDefaults(defaults, options)
  if (Mongoose.connection.readyState) {
    return next()
  }
  server.log(`${process.env.NODE_ENV} server connecting to ${defaults.url}`)
  return Mongoose.connect(defaults.url, mongooseOptions).then(() => {
    return next() // call the next item in hapi bootstrap
  })
}
Assuming your backend is deployed on Lambda, given the serverless tag.
Each invocation will leave a container idle to prevent cold starts, or use an existing one if available. You are leaving the connection open to reuse it between invocations, as advertised in the best practices.
With a poolSize of 25 (?) and 100 max connections, you should limit your function concurrency to 4.
Reserve concurrency to prevent your function from using all the available concurrency in the region, or from overloading downstream resources.
More reading: https://www.mongodb.com/blog/post/optimizing-aws-lambda-performance-with-mongodb-atlas-and-nodejs
You could try a couple of things:
In a serverless environment, as already suggested by @Gabriel Bleu, why have such a high connection limit? A serverless environment keeps spawning new containers and stopping them as requests come and go. If multiple instances spawn concurrently, they will exhaust the MongoDB server limit very quickly.
The concept of a connection pool is that x connections are established from every node (instance). But that does not mean all the connections are automatically released after querying. After completing ALL the DB operations, you should release the connections: mongoose.connection.close();
Note: Mongoose's connection close will close all the connections of the connection pool. So ideally, this should be run just before returning the response.
Why are you explicitly setting autoReconnect to true? The MongoDB driver reconnects internally whenever the connection is lost, and reconnecting is certainly not recommended for short-lifespan instances such as serverless containers.
If you are running in cluster mode, to optimize for performance, change the server URI to the replica set URL format: MONGODB_URI=mongodb://<username>:<password>@<hostOne>,<hostTwo>,<hostThree>.../?ssl=true&authSource=admin.
There are many factors affecting the max connection limit. You have MongoDB hosted on Atlas, and as you mentioned the backend is Lambda, which means you have a serverless environment.
A serverless environment spawns a new container for each new connection and destroys it when it's no longer being used. The peak in connections shows that many new instances are being initialized, or that there are many concurrent requests from users. The best practice is to terminate a database connection once it's no longer needed. You can terminate the connection with
mongoose.connection.close(), since you have used mongoose. It will release the connections held in the connection pool. Rather than exhausting the concurrent connection limit, you should release connections once they are idle.
Your configuration forces the database driver to reconnect after the connection is dropped by the database. You are explicitly setting autoReconnect to true, so the driver will immediately issue a new connection request once the connection is dropped. That may eat into the concurrent connection limit. You should avoid setting it explicitly.
Cluster mode can balance requests according to the load; you can change the server URI to the replica-set form of the database. It may help distribute the load.
There is a small initial startup cost of approximately 5 to 10 seconds when the Lambda function is invoked for the first time and the MongoDB client in your AWS Lambda function connects to MongoDB. Connections to a mongos for a sharded cluster are faster than connecting to a replica set. Subsequent connections will be significantly faster for the duration of the lifecycle of the Lambda function, so each invocation will leave a container idle to prevent a cold start (cold boot), or use an existing one if available.
Atlas sets the limit for concurrent incoming connections to a cluster based on the cluster tier. If you try to connect when you are at this limit, MongoDB displays an error stating "connection refused because too many open connections". You can close any open connections to your cluster that are not currently in use, or scale up to a higher tier to support more concurrent connections. As mentioned in the best practices, you may restart the application. To prevent this issue in the future, consider using the maxPoolSize connection string option to limit the number of connections in the connection pool.
The final solution to this issue is upgrading to a larger Atlas cluster tier, which allows a greater number of connections, if your user base is too large for your current cluster tier.
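The connection-reuse pattern behind these answers can be sketched generically. Here connect is a hypothetical stand-in for mongoose.connect(uri, { maxPoolSize: ... }), and the cached value lives in module scope so warm invocations of the same container reuse it instead of opening new connections:

```javascript
// Module-scope cache survives across warm invocations of the same container.
let cachedClient = null;

async function getClient(connect) {
  if (cachedClient) return cachedClient; // warm container: reuse
  cachedClient = await connect();        // cold start: connect exactly once
  return cachedClient;
}

// A handler would then do: const db = await getClient(realConnect);
```

The trade-off discussed above still applies: the cached connection stays open for the container's lifetime, so the per-function pool size times the Lambda concurrency must stay under the Atlas tier's connection limit.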

How can I instrument and log my KnexJS transactions?

I have a serious problem in production causing the application to become unresponsive and output the following error:
Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
A running hypothesis is some operations are holding onto long-running Knex transactions. Enough of them to reach the pool size, basically.
Is there a way to query the KnexJS API for how many pool connections are in use at any one time? Unfortunately, since KnexJS keeps connections open up to the max pool setting from the config, it can be hard to know how many are actually in use. From the Postgres end, it seems that KnexJS is idling on all of its connections when they are not in use.
Is there a good way to instrument Knex transaction and transacting with some kind of middleware or hook? Another useful thing would be to log the call stack of any transaction (or any transaction running longer than, say, 7 seconds). One challenge is that I have calls to Knex transaction and transacting throughout my project. Maybe it's a long shot.
Any advice is greatly appreciated.
System Information
KnexJS version: 0.12.6 (we will update in the next month)
Database + version: Postgres 9.6
OS: Heroku Linux (Ubuntu?)
The easiest way to see what's happening at the connection-pool level is to run knex with the DEBUG=knex:* environment variable set, which will print quite a lot of debug info about what's happening inside knex. Those logs show, for example, when connections are fetched from and returned to the pool, and every query that runs.
There are a couple of global events that you can use to hook into every query, but there isn't one for hooking into transactions. Here is a related question where I have written some example code showing how to measure transaction durations with query hooks: Tracking DB querying time - Bookshelf/knex. It probably leaks some memory, so it's not a very production-ready solution, but for your debugging purposes it might be helpful.
