My application only runs a SELECT query every 3 seconds, but when I run more than one pod with the same app, the database connections get stuck: there are more than 20 active connections.
async test(text) {
  const client = new Client(main_db);
  await client.connect();
  try {
    const result = await client.query(text);
    return result.rows;
  } finally {
    await client.end();
  }
}
This is the method I use to make queries. There is an issue about this on GitHub, where the contributors recommended this approach.
How do I fix this?
If you are using a connection pool, you cannot end connections. You will have to release the connections back to the pool to be re-used.
Refer to https://node-postgres.com/api/pool#release-err-error-
If you are creating a new connection for every query and closing it afterwards, you shouldn't be having the problem above. If you are not closing the connections, you won't see a problem on the client side (like the one mentioned in the question), but the DB will be overloaded.
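A minimal sketch of the pooled approach, assuming pg's Pool API and the same main_db config object used in the question:

const { Pool } = require('pg');

// One pool for the whole process; pg keeps idle connections around
// and reuses them instead of opening a new TCP connection per query.
const pool = new Pool(main_db); // main_db config from the question

async function test(text) {
  // pool.query() checks a client out, runs the query, and releases
  // the client back to the pool automatically.
  const result = await pool.query(text);
  return result.rows;
}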
I'm building a multi-tenant app where I create database connections on the fly, as soon as I resolve the tenant's database connection string from the request that has just hit the server.
It works as expected, but the connections keep adding up and never get disconnected.
From what I've read, mongoose.connect manages its connections but mongoose.createConnection doesn't; I'm not sure if my understanding is correct here.
I thought about creating my own connection pool with an in-memory map and reusing a connection from the map if it already exists, but I'm not sure if this is a good approach.
Does anyone know of an npm connection pool package already built for this problem? Or any implementation ideas?
I also thought about closing each connection manually when the request lifecycle ends, but connecting to and disconnecting from Mongo on every request would hurt performance compared to using a connection pool.
Here is the part of the code where I create the connection; nothing special here, because I always create a new connection.
// ... Resolve connection string from request
let tentantConn;
try {
  // One connection per tenant
  tentantConn = await mongoose.createConnection(
    decrypt(tenant.dbUrl),
    {
      useNewUrlParser: true,
      useUnifiedTopology: true
    });
} catch (e) {
  req.log.info({ message: `Unauthorized - Error connecting to tenant database: ${currentHostname}`, error: e.message });
  return reply.status(401).send({ message: `Unauthorized - Error connecting to tenant database: ${currentHostname}`, error: e.message });
}
// ...
The connection pool is implemented at the driver level:
https://github.com/mongodb/node-mongodb-native/blob/main/src/cmap/connection_pool.ts
By default it opens 5 connections per server. You can change the pool size, but you cannot disable pooling.
Now, the terminology is a bit confusing, as a single MongoDB server / cluster can host multiple databases. They share the same connection string, and the same 5 connections from the pool, regardless of the number of databases.
Assuming your tenants have individual clusters and therefore connect to different MongoDB servers, in order to close these connections you need to explicitly close each connection you created:
await tentantConn.close()
(mongoose.connection.close() closes only the default connection made by mongoose.connect.)
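A minimal sketch of both points, closing per-cluster connections explicitly and serving same-cluster tenants via useDb(), which retargets the database without opening new sockets. asPromise() assumes Mongoose 6+; on Mongoose 5, the connection itself is awaitable, as in the question's code. clusterUrl is a placeholder:

const mongoose = require('mongoose');

async function demo(clusterUrl) {
  // One physical connection (and one driver pool) per cluster.
  const clusterConn = await mongoose.createConnection(clusterUrl).asPromise();

  // Databases on the same cluster share that pool; useDb() does not
  // open new sockets, it just retargets the same connection.
  const tenantA = clusterConn.useDb('tenantA');
  const tenantB = clusterConn.useDb('tenantB');

  // ... use tenantA.model(...), tenantB.model(...) ...

  // Close explicitly when the cluster is no longer needed,
  // otherwise its sockets stay open.
  await clusterConn.close();
}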
It took me a few days to get back to this issue, but I was able to tweak my code, and the connection count on MongoDB Atlas now seems stable. I'm not thrilled about using a global variable to fix this, but it solves my issue for now.
async function switchTenantConnection(aConnStr, aDbName, aAsyncOpenCallback) {
  // != null also catches undefined, so the check works even before
  // the global has ever been set.
  const hasConn = global.connectionPoolTest != null;
  if (!hasConn) {
    const tentantConn = await getTenantConnectionFromEncryptStr(aConnStr);
    if (aAsyncOpenCallback) {
      tentantConn.once('open', aAsyncOpenCallback);
    }
    // Drop the cached connection so the next call reconnects.
    tentantConn.once('disconnected', async function () {
      global.connectionPoolTest = null;
    });
    tentantConn.once('error', async function () {
      global.connectionPoolTest = null;
    });
    global.connectionPoolTest = { dbName: aDbName, connection: tentantConn, createdAt: new Date() };
    return tentantConn;
  }
  // Reuse the cached connection, switching to the tenant's database.
  return global.connectionPoolTest.connection.useDb(aDbName);
}
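For what it's worth, the in-memory map idea from the question is a small step up from a single global, since it can hold one connection per cluster. A minimal sketch, reusing the same getTenantConnectionFromEncryptStr helper from the code above:

// Hypothetical per-tenant cache keyed by encrypted connection string.
const tenantConnections = new Map();

async function getOrCreateTenantConnection(aConnStr, aDbName) {
  let conn = tenantConnections.get(aConnStr);
  if (!conn) {
    conn = await getTenantConnectionFromEncryptStr(aConnStr);
    // Evict on failure so the next request reconnects.
    conn.once('disconnected', () => tenantConnections.delete(aConnStr));
    conn.once('error', () => tenantConnections.delete(aConnStr));
    tenantConnections.set(aConnStr, conn);
  }
  return conn.useDb(aDbName);
}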
I'm using AWS Lambda, and all of my queries are UPDATE statements. I want to send an async call to the database and then end the Lambda without waiting for the promise to resolve. I want to know whether that's possible or not. I would also love insights on how the connection to the DB is established and persists.
This results in a "connection terminated" error:
// inside aws lambda
client = new Client("connectionString"); // rds-proxy connection
client.connect();
client.query('call the stored procedure');
client.end(); // ends the connection before the query settles
This works, but it creates too many connections to the proxy server:
// inside aws lambda
client = new Client("connectionString"); // rds-proxy connection
client.connect();
client.query('call the stored procedure');
// no client.end(), so the connection is never closed
I'm using rds-proxy; is there a way to terminate the previous connection, or to reuse the connection when a new request connects to the same instance?
This is my first post, please let me know if I've missed anything.
You should use a pool rather than creating new connections for every invocation. (The example below uses the mysql package, but the same idea applies to pg's Pool.)
var mysql = require('mysql');
var pool = mysql.createPool(...);

pool.getConnection(function (err, connection) {
  if (err) throw err; // not connected!

  // Use the connection
  connection.query('SELECT something FROM sometable', function (error, results, fields) {
    // When done with the connection, release it.
    connection.release();

    // Handle error after the release.
    if (error) throw error;

    // Don't use the connection here, it has been returned to the pool.
  });
});
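Since the question itself uses pg, a rough equivalent there is to create the Pool outside the handler so warm Lambda invocations reuse it, and to await the query before returning. This is a sketch, not the poster's code; the env var and procedure name are placeholders:

const { Pool } = require('pg');

// Created once per container, outside the handler, so warm
// invocations reuse the same pool instead of reconnecting.
const pool = new Pool({ connectionString: process.env.DATABASE_URL });

exports.handler = async (event) => {
  // Await the query: ending the Lambda (or the client) before the
  // promise settles is what produces "connection terminated".
  await pool.query('CALL my_procedure()');
  return { statusCode: 200 };
};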
I'm working with elasticsearch-js (NodeJS) and everything works just fine as long as ElasticSearch is running. However, I'd like to know that my connection is alive before trying to invoke one of the client's methods. I'm doing things in a somewhat synchronous fashion, but only for the purpose of performance testing (e.g., check that I have an empty index to work in, ingest some data, query the data). Looking at a snippet like this:
var elasticClient = new elasticsearch.Client({
  host: ((options.host || 'localhost') + ':' + (options.port || '9200'))
});

// Note, I already have promise handling implemented, omitting it for brevity though
var promise = elasticClient.indices.delete({index: "_all"});
/// ...
Is there some mechanism to send in on the client config to fail fast, or some test I can perform on the client to make sure it's open before invoking delete?
Update: 2015-05-22
I'm not sure if this is correct, but perhaps attempting to get client stats is reasonable?
var getStats = elasticClient.nodes.stats();
getStats.then(function (o) {
  console.log(o);
})
.catch(function (e) {
  console.log(e);
  throw e;
});
Via node-debug, I am seeing the promise rejected when ElasticSearch is down / inaccessible with: "Error: No Living connections". When it does connect, o in my then handler seems to have details about connection state. Would this approach be correct or is there a preferred way to check connection viability?
Getting stats can be a heavy call just to check that your client is connected. You should use ping instead; see the second example at https://github.com/elastic/elasticsearch-js#examples
We use ping too, right after instantiating the elasticsearch-js client on start-up.
// example from above link
var elasticsearch = require('elasticsearch');
var client = new elasticsearch.Client({
  host: 'localhost:9200',
  log: 'trace'
});

client.ping({
  // ping usually has a 3000ms timeout
  requestTimeout: Infinity,
  // undocumented params are appended to the query string
  hello: "elasticsearch!"
}, function (error) {
  if (error) {
    console.trace('elasticsearch cluster is down!');
  } else {
    console.log('All is well');
  }
});
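If you prefer the promise style used elsewhere in the question, ping also returns a promise when no callback is passed (in the legacy elasticsearch-js client), so a fail-fast startup check could look like this sketch:

// A short, finite timeout makes the check fail fast instead of hanging.
client.ping({ requestTimeout: 3000 })
  .then(function () {
    console.log('elasticsearch is reachable');
  })
  .catch(function (e) {
    console.error('elasticsearch is down or unreachable', e);
    process.exit(1); // fail fast at startup
  });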
I'm using the redis-sentinel-client library to manage a connection to a Redis sentinel group. The issue I have is that, upon connecting, I need to process records which may or may not already be present in the Redis store.
As I have two clients (one of them is a subscriber), I am not sure of the best way to organise my event listeners so that both clients are guaranteed to be ready before I attempt any operations.
At the moment I have the following:
var sentinelSubscriberClient = RedisSentinel.createClient(opts);
var sentinelPublisherClient = RedisSentinel.createClient(opts);

sentinelSubscriberClient.on('ready', function redisSubscriberClientReady() {
  sentinelPublisherClient.removeAllListeners('ready');
  sentinelPublisherClient.on('ready', function () {
    supportedChannels.forEach(function (channel) {
      sentinelSubscriberClient.subscribe(channel);
    });
    // Includes reading + publishing via `sentinelPublisherClient`
    processUnprocessed();
  });
});
(there are also error listeners but I've removed them to make the code easier to read)
This current approach falls over if the publisher client emits ready before the subscriber client. My question is: how can I organise the event listeners so that I can safely call .subscribe() on the subscriber client and the various methods (.lrange(), .publish(), etc.) of the publisher client?
Thanks!
Simply move client creation into the ready callback function.
var sentinelSubscriberClient = RedisSentinel.createClient(opts);
var sentinelPublisherClient = null;

sentinelSubscriberClient.on('ready', function redisSubscriberClientReady() {
  // Only create the publisher once the subscriber is ready,
  // so the publisher's 'ready' event cannot fire first.
  sentinelPublisherClient = RedisSentinel.createClient(opts);
  sentinelPublisherClient.on('ready', function () {
    supportedChannels.forEach(function (channel) {
      sentinelSubscriberClient.subscribe(channel);
    });
    // Includes reading + publishing via `sentinelPublisherClient`
    processUnprocessed();
  });
});
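This works, but it serialises the two connects. An alternative sketch that keeps both connects in flight and waits for both 'ready' events, assuming the same redis-sentinel-client API as above:

var sentinelSubscriberClient = RedisSentinel.createClient(opts);
var sentinelPublisherClient = RedisSentinel.createClient(opts);

var pending = 2;
function onBothReady() {
  // Run setup only after *both* clients have emitted 'ready',
  // regardless of which one fires first.
  if (--pending === 0) {
    supportedChannels.forEach(function (channel) {
      sentinelSubscriberClient.subscribe(channel);
    });
    processUnprocessed();
  }
}

sentinelSubscriberClient.once('ready', onBothReady);
sentinelPublisherClient.once('ready', onBothReady);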
I am writing a program that works with RabbitMQ via AMQP on Heroku.
Part of my program has this code:
console.log('APP START');

// Connect to db and start
global.controllers.db.opendb(dbsettings, function (error, db) {
  if (!error) {
    global.db = db;
    console.log('DB: connection to database established.');
    var con = amqp.createConnection({ url: global.queue.producers.host });
    con.on('ready', function () {
      console.log('mq: producers connection ready.');
    });
  }
});
As I understood from the documentation, I should get this message only once, upon a successful connection to the queue service.
Is there any particular reason, then, why my output contains many lines saying mq: producers connection ready.?
The amqp-node library automatically reconnects when the connection is lost or when an error occurs in your code. I can't see anything wrong with the code above, but if any exception is thrown in your rabbit-related code (including in other places, such as when connecting and subscribing to queues), amqp-node will try to re-establish your connection, keep hitting the same exception, and keep retrying.
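One way to make the reconnect behaviour visible, and to keep the setup from re-running on every reconnect, is a sketch like this; the 'error' and 'ready' event names come from the amqp library, while the handler bodies are illustrative:

var con = amqp.createConnection({ url: global.queue.producers.host });

// Log errors explicitly so silent reconnect loops become visible.
con.on('error', function (err) {
  console.error('mq: connection error:', err.message);
});

// 'once' runs the setup a single time; later 'ready' events
// fired by automatic reconnects will not re-run it.
con.once('ready', function () {
  console.log('mq: producers connection ready.');
});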