Is there a way to automatically close idle pgadmin processes with sequelize? - node.js

I am running a Node.js (v14.15.4) application with Sequelize (v6.6.2) as an ORM connecting to a PostgreSQL database, and after several operations I find about 35 idle processes on my pgAdmin dashboard.
In my index file, I have set up Sequelize like below:
sequelize = new Sequelize(process.env[config.use_env_variable], {
  logging: false,
  pool: {
    max: 15,
    min: 0,
    acquire: 30000,
    idle: 10000,
    evict: 10000
  }
});
Is there something that I am missing here? My understanding is that evict instructs Sequelize to remove idle connections after the specified amount of time.

The connections with the application name "pgAdmin 4 - CONN-*" are used by pgAdmin's query tool, not by your application. Check whether you have any open query tool instances in pgAdmin.
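If it helps to see exactly which connections are idle and where they come from, you can query pg_stat_activity through the same Sequelize instance. A minimal sketch, assuming the sequelize instance configured above and a PostgreSQL database; the listConnections name is illustrative:
// A sketch only: lists server-side connections for the current database so you can
// tell which ones belong to this app and which come from pgAdmin's query tool.
async function listConnections() {
  const [rows] = await sequelize.query(
    "SELECT pid, application_name, state, state_change " +
    "FROM pg_stat_activity WHERE datname = current_database()"
  );
  rows.forEach(r => console.log(r.pid, r.application_name, r.state));
}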

Related

Node JS Sequelize cannot connect to Azure SQL Database

I'm currently trying to connect a Node.js app to a single database that I created in Azure SQL Database. I use Sequelize to connect. I set up the firewall to accept my IP address as explained here, and I configured a config.json file like so:
"username": "SERVER_ADMIN_NAME#MY_IP_ADDRESS",
"password": "ADMIN_PASSWORD",
"database": "DATABASE_NAME",
"host": "SERVER_NAME",
"port": 1433,
"dialect": "mssql",
"dialectOptions": {
"options": {
"encrypt": true
}
}
However, after running the application it fails to connect to the database and returns the following message
"Cannot open server '211' requested by the login. Client with IP address 'MY_IP_ADDRESS' is not allowed to access the server. To enable access, use the Windows Azure Management Portal or run sp_set_firewall_rule on the master database to create a firewall rule for this IP address or address range. It may take up to five minutes for this change to take effect."
I've already waited for more than five minutes but the result is still the same. My first suspicion was the values I provided in the config.json file. However, after checking sys.database_firewall_rules with the following query
SELECT * FROM sys.database_firewall_rules;
The table was EMPTY. From here on I'm not really sure what I'm supposed to do. I was wondering if anybody could point out what I was missing? Thanks in advance!
You should not connect to Azure SQL Database using the IP address, because it can change at any time.
Could you try a connection like the one below, using the tedious driver?
var Sql = require('sequelize');
var sql = new Sql('dbname', 'UserName@server', 'password', {
  host: 'server.database.windows.net',
  dialect: 'mssql',
  driver: 'tedious',
  options: {
    encrypt: true,
    database: 'dbname'
  },
  port: 1433,
  pool: {
    max: 5,
    min: 0,
    idle: 10000
  }
});
Make sure you are adding your public IP address not your local IP address to the firewall rules. To verify the firewall rules you have added already, please run the following query:
SELECT * FROM sys.firewall_rules;
The above query shows rules at the server level, which is where the portal creates them; that is why the database-level sys.database_firewall_rules view appeared empty.
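Once the firewall rule is in place, a quick way to confirm that the configuration works is to call authenticate() on the instance. A minimal sketch, assuming the sql instance from the answer above:
sql.authenticate()
  .then(() => console.log('Connected to Azure SQL Database'))
  .catch(err => console.error('Connection failed:', err.message));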

How to manage connection pool with Knex/generic-pool and Oracle

We use Knex with generic-pool as our query builder and pool manager for our Oracle 11.2 database.
The problem we are facing is that sometimes Knex / generic-pool starts to accumulate connections and can't recycle them.
I tried to pass some parameters to Knex / generic-pool to make them kill connections after some point, but it looks like that did not work.
Package versions:
Knex: v0.13.0
Oracledb: v1.13.1
Generic Pool: v2.5.4
Knex configuration:
{
  client: 'oracledb',
  connection: {
    user: DB_USER,
    password: DB_PASSWORD,
    host: `${DB_HOST}:${DB_PORT}`,
    database: DB_NAME
  },
  debug: true,
  fetchAsString: ['number', 'clob'],
  acquireConnectionTimeout: 843600000,
  pool: {
    min: 2,
    max: 150,
    acquireTimeoutMillis: 100000,
    evictionRunIntervalMillis: 120000,
    maxWaitingClients: 100,
    idleTimeoutMillis: 100000
  }
}
OpenShift output with the environment variable DEBUG="knex:*" set shows a lot of clients waiting for a connection.
Try Knex 0.14.2; some pool-related problems were fixed in that release. Also try to add some debug information when transactions are created/committed/rolled back. Open transactions take a connection from the pool and do not release it until the transaction ends. You can get information about the pool and transactions by running the app with the DEBUG=knex:* environment variable set. A sketch of such transaction logging is shown below.
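A minimal sketch of that logging; the withTransaction helper and its label argument are illustrative, not Knex APIs, and the connection settings are placeholders:
const knex = require('knex')({
  client: 'oracledb',
  connection: { user: 'app_user', password: 'secret', host: 'dbhost:1521', database: 'ORCL' },
  pool: { min: 2, max: 10 }
});

// Wrap knex.transaction so every transaction logs when it starts, commits, or rolls back.
// A transaction that starts but never ends is holding on to a pool connection.
function withTransaction(label, work) {
  console.log(`[trx ${label}] started`);
  return knex.transaction(trx => work(trx))
    .then(result => { console.log(`[trx ${label}] committed`); return result; })
    .catch(err => { console.log(`[trx ${label}] rolled back: ${err.message}`); throw err; });
}
Called like withTransaction('update-user', trx => trx('users').where({ id: 1 }).update({ active: 1 })), any transaction that logs "started" without a matching "committed" or "rolled back" is a likely source of the stuck connections.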

MongoError: not master

I am trying to connect a Node.js app to MongoDB with a replica set, but it throws an error when any write operations are performed.
It throws MongoError: not master.
It tries to write on secondary mongo instances.
I have the options { db: { readPreference: 'secondaryPreferred' } } and pass them to MongoClient.connect in the Node.js code using the native Mongo driver.
The URL used to connect looks like mongodb://admin:pass@host_one:27017,host_two:27017,host_three:27017/dbName
Any help would be really appreciated.
Did you add in your replicaSet name?
mongodb://admin:pass@host_one:27017,host_two:27017,host_three:27017/dbName?replicaSet=my-replica-set
replicaSet=name
The driver verifies that the name of the replica set it connects to
matches this name. Implies that the hosts given are a seed list, and
the driver will attempt to find all members of the set. No default
value.
If this is not set it will be treated as a standalone node.
Maybe your replica set configuration is not correct.
To check the configuration, run the rs.conf() command on your mongo servers. You need to have a mongo host running as the primary member.
MongoError: Not master
This error suggests that the primary member of your replica set is not configured properly.
You can confirm this by entering into mongo shell of the host_one. If mongo shell prompt doesn't show PRIMARY, then it's not configured properly.
Mongo shell prompt of host_two and host_three should show SECONDARY after proper configuration.
Important: Run rs.initiate() on one and only one mongod instance for the replica set.
You can execute this command on the primary member to make the configuration work properly.
rs.initiate();
cfg = {
  _id: 'rs0',
  members: [{
    _id: 0,
    host: 'host_one:27017',
    priority: 2
  }, {
    _id: 1,
    host: 'host_two:27017',
    priority: 1
  }, {
    _id: 2,
    host: 'host_three:27017',
    priority: 1
  }]
};
cfg.protocolVersion = 1;
rs.reconfig(cfg, {
  force: true
});
Please note that priority value indicates the relative eligibility of a member to become a primary.
Specify higher values to make a member more eligible to become primary, and lower values to make the member less eligible. A member with a priority of 0 is ineligible to become primary.
You can again check your replica set configuration using this command
rs.conf()
Read preference is not applicable to writes. Writes must always be performed on the primary.
You should be connecting to the replica set instead of directly to an individual node; see node.js mongodb how to connect to replicaset of mongo servers. A sketch of such a connection is below.
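A minimal sketch of a replica-set connection with the native driver; the replica set name rs0, the hosts, and the credentials are placeholders, and exact option names can vary between driver versions:
const { MongoClient } = require('mongodb');

const uri = 'mongodb://admin:pass@host_one:27017,host_two:27017,host_three:27017/dbName'
  + '?replicaSet=rs0&readPreference=secondaryPreferred';

MongoClient.connect(uri, function (err, client) {
  if (err) throw err;
  const db = client.db('dbName');
  // Reads may be served by secondaries; writes are always routed to the current primary.
  db.collection('items').insertOne({ ok: true }, function (insertErr) {
    if (insertErr) throw insertErr;
    client.close();
  });
});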

How to use database connections pool in Sequelize.js

I need some clarification about what the pool is and what it does. The docs say Sequelize will set up a connection pool on initialization, so you should ideally only ever create one instance per database.
var sequelize = new Sequelize('database', 'username', 'password', {
  host: 'localhost',
  dialect: 'mysql'|'mariadb'|'sqlite'|'postgres'|'mssql',
  pool: {
    max: 5,
    min: 0,
    idle: 10000
  },
  // SQLite only
  storage: 'path/to/database.sqlite'
});
When your application needs to retrieve data from the database, it creates a database connection. Creating this connection involves some overhead of time and machine resources for both your application and the database. Many database libraries and ORMs will try to reuse connections when possible, so that they do not incur the overhead of establishing that DB connection over and over again. The pool is the collection of these saved, reusable connections that, in your case, Sequelize pulls from. Your configuration of
pool: {
  max: 5,
  min: 0,
  idle: 10000
}
reflects that your pool should:
Never have more than five open connections (max: 5)
At a minimum, have zero open connections/maintain no minimum number of connections (min: 0)
Remove a connection from the pool after the connection has been idle (not been used) for 10 seconds (idle: 10000)
tl;dr: Pools are a good thing that help with database and overall application performance, but if you are too aggressive with your pool configuration you may impact that overall performance negatively.
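As a practical illustration of the "one instance per database" advice, a common pattern is a small module that creates and exports a single Sequelize instance so every part of the app shares the same pool. A minimal sketch; the file name db.js, the credentials, and the dialect are placeholders:
// db.js
const { Sequelize } = require('sequelize');

// One instance = one connection pool for the whole application.
const sequelize = new Sequelize('database', 'username', 'password', {
  host: 'localhost',
  dialect: 'postgres',
  pool: { max: 5, min: 0, idle: 10000 }
});

module.exports = sequelize; // every model/route file requires this same instance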
pool is draining error
I found this thread while searching for a Sequelize error my Node.js app was giving: pool is draining. I could not for the life of me figure it out, so for those who follow in my footsteps:
The issue was that I was closing the database earlier than I thought, with the command sequelize.closeConnections(). For some reason, instead of an error like 'the database has been closed', it gave the obscure error 'pool is draining'.
It seems that you can set pool to false to avoid having the pool created at all. Here is the API details table: http://sequelize.readthedocs.org/en/latest/api/sequelize/
[options.pool={}] (Object): Should Sequelize use a connection pool. Defaults to true.

Trouble getting Mongoose to reconnect to nodes and send requests to secondaries

A common connection string for Mongoose connecting to a replica set looks something like the following:
var connection = mongoose.createConnection("mongodb://db_1:27017/client_test,mongodb://db_2:27017/client_test", {
  replSet: { rs_name: "rs0", poolSize: 5, socketOptions: { keepAlive: 1 } }
}, function(err) {
  if (err) { throw err; }
});
The problem with that is that if one of the two hosts is down, it will fail to connect. If you only specify one host, then no requests end up getting sent to secondaries.
Here's my proof for that claim. If you specify one host, set up your replica set so that there is one primary and an arbiter, and then perform a query such as
myApi.find({}).slaveOk().read("s").exec(function(err, docs) {
  console.log(docs)
})
It will return results. Since I am specifying "s" (secondary), this query should throw an error because there are no running secondaries. In addition, if you bring the secondary online and then run db.currentOp(true), you will never see any actual queries sent its way.
The moment you alter the connection string to specify every host, you will see connections go to the secondary. The dilemma is that, because you had to specify the additional host in the connection string, if a secondary were offline, it would fail to connect, and we've now lost failover (the entire point of replica sets).
I can't determine if this is a configuration mistake on my part, a bug in Mongoose, or a conceptual flaw in my understanding of how replica sets function. Some of the docs seem to state that reading from secondaries is basically a bad idea, but the reason given is usually stale data. My issue doesn't have anything to do with stale data; I can't figure out a way to set up the system so that queries go to secondaries without losing failover capacity.
1. The connection string just defines seed servers; the MongoDB driver tries to connect to these servers and gets information about the other members of the replica set (by calling rs.status()). You could have a replica set with 5 nodes but specify only one in the connection string, and the driver would still be able to find the four others, as long as the server from the connection string is available.
2. My proposal is to use secondaryPreferred instead of just secondary, so that if no secondary is available, the request is sent to the primary. A sketch of such a read is shown below.
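A minimal sketch of a secondaryPreferred read with Mongoose; the Item model and its schema are placeholders:
const mongoose = require('mongoose');
const Item = mongoose.model('Item', new mongoose.Schema({ name: String }));

Item.find({}).read('secondaryPreferred').exec(function (err, docs) {
  if (err) throw err;
  console.log(docs.length); // served from a secondary when available, otherwise from the primary
});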
Ok, I believe I have solved all of my problems. Here is what I learned.
Specify all possible replica nodes in your connection string, otherwise Mongoose will never send requests there. Mongoose has a specific format for this which is different from the node-mongodb-native driver's. Example below.
To prevent it from hanging forever if one of the nodes is down at boot-up, you need to specify connectTimeoutMS in the 'replset' options; then it will only wait that long for responses from each node on the initial connection. If the node comes online at a later date, it will still be available.
The hostname entries in your MongoDB replica set configuration need to match the hostname entries in the connection string from your application, and all hostnames need to be reachable by all parties (mongo to mongo, and application to mongo). In my case I had aliased the hostnames from mongo to mongo as mongo1:27017, mongo2:27017, and mongo3:27017. My application server used a connection string with IPs. Mongoose was attempting to re-initiate the connection using the mongo1:27017 hostname (which my application server could not reach) rather than the IP address I specified in the connection string. This resulted in it never re-connecting to a node it lost contact with. It is possible that had I used hostnames the application could reach it still would have worked, but I think it's a best practice to make the connection string and the replica configuration identical to remove possible places for error.
On the MongoDB node where you run rs.initiate(), you might need to update the hostname to a value that all boxes (the other mongod instances and the application server) can reach. By default it will likely end up with a hostname like localhost, which means something different on each machine. This can be done from that box's mongo shell like so.
Example:
// from mongo shell
conf = rs.conf()
conf.members[0].host = "mongo1:27017"
rs.reconfig(conf)
Final functioning connection string, which successfully fails over between nodes and throws errors if a query is destined for a secondary but there are no secondaries:
var connection = mongoose.createConnection("mongodb://mongo1:27017/client_test,mongo2:27017/client_test,mongo3:27017/client_test", {
  replset: { rs_name: "rs0", poolSize: 5, socketOptions: { keepAlive: 1, connectTimeoutMS: 1000 } }
}, function(err) {
  if (err) { throw err; }
});
Working replica setup
{
  "_id": "rs0",
  "version": 4,
  "members": [
    {
      "_id": 0,
      "host": "mongo1:27017"
    },
    {
      "_id": 1,
      "host": "mongo2:27017"
    },
    {
      "_id": 2,
      "host": "mongo3:27017",
      "arbiterOnly": true
    }
  ]
}
I had an issue similar to yours while dealing with a replica set; in my case I had one primary node with a priority of 10, one secondary with a priority of 0 (for analytics), and an arbiter.
My writes would fail after reconnecting the primary instance, and I went through a lot trying to fix it. Here's the most important thing I learned:
When my primary is down or unreachable, there has to be another member eligible to become primary (at least two members in my set have to have a priority >= 1).
If I have only arbiters, hidden members, or members with a priority of 0, queries get stuck even after I reconnect my primary, and my client is unable to complete write queries. Read queries still work, but writes don't.
This is what I faced with Mongoose, even with keepalive, autoreconnect, and all the socket and connection timeout options (in ms) set.
Hopefully this helps.
