postgresql and node with redis still making connection to db pool? - node.js

I'm a bit stuck here and was hoping to get some help.
My Node application has a separate module where I connect to Postgres and export the pool, like so:
const { Pool, Client } = require('pg');

const pool = new Pool({
  user: process.env.POSTGRES_USER,
  host: process.env.POSTGRES_URL,
  database: process.env.POSTGRES_DATABASE,
  password: process.env.POSTGRES_PASSWORD,
  port: process.env.POSTGRES_PORT,
  keepAlive: false,
  // note: sslmode belongs in a connection string; the ssl object here only
  // takes TLS options such as rejectUnauthorized
  ssl: { rejectUnauthorized: false },
  connectionTimeoutMillis: 10000, // 10 seconds
  allowExitOnIdle: true,
  max: 10
});

pool.connect()
  .then(() => console.log('postgres connected'))
  .catch(err => console.error(err))

module.exports = pool
On my route, I have a Redis cache as middleware. This works as expected - I can confirm responses are being served by Redis, and the logic in the route does not run when the request is cached. However, I was doing some load testing to see how everything would handle spikes and noticed I started to get errors from Postgres:
Error: timeout exceeded when trying to connect
I also got errors about max connections, etc.
I have tried increasing the pool's max connections but still seem to get this error when running larger load tests.
My question is: why would pg be trying to connect at all if the connection should be shared? And why is it even trying to connect if the request is cached?
Any help would be appreciated!

Apparently some of your stress test cases are missing the redis cache. You haven't shown any code relevant to that, so what more can be said?
The error you show is not generated by PostgreSQL, it is generated by node's 'pg' module. You configured it to only allow 10 simultaneous connections. If more than that are requested, they have to line up and wait. And you also configured it to wait only for 10 seconds before bombing out with an error, and that is exactly what you are seeing.
You vaguely allude to other errors, but you would have to share the actual error message with us if you want help.
The system seems to be operating as designed. You did a stress test to see what would happen, and you have seen what happens.
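To make the queuing concrete: with node-postgres, every pool.connect() borrows one of those 10 clients until release() is called, and the module in the question borrows one at startup for the "connected" log without ever returning it. Below is a minimal sketch, not the asker's actual code, of the two usual patterns that keep clients flowing back to the pool; the db.js filename, the users table, and the helper names are assumptions for illustration.

// Assuming the pool module shown in the question is saved as db.js.
const pool = require('./db');

// For one-off queries, pool.query() checks a client out, runs the query,
// and releases it again, so a traffic spike queues for at most `max`
// clients instead of leaking them.
async function getUserById(id) {
  const { rows } = await pool.query('SELECT * FROM users WHERE id = $1', [id]);
  return rows[0];
}

// When you call pool.connect() yourself (e.g. for a transaction), always
// release the client, even on error; an unreleased client is one fewer
// connection for everyone else.
async function runInTransaction(work) {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    const result = await work(client);
    await client.query('COMMIT');
    return result;
  } catch (err) {
    await client.query('ROLLBACK');
    throw err;
  } finally {
    client.release();
  }
}

module.exports = { getUserById, runInTransaction };

With this shape, a load spike still queues once all 10 clients are busy, but each client is returned as soon as its query finishes, so connectionTimeoutMillis is far less likely to fire.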

Related

Quickly using up all connections on postgresql in Node.js

I am using an app on GCP with Node.js and PostgreSQL (Cloud SQL, lowest tier, i.e. 25 connections) via the 'pg' package ("pg": "^8.7.3"). I am quite new to this configuration, so there may be some very basic errors here.
I configure my pg_client like this
// CLOUD SQL POSTGRESQL DATABASE
const { Client, Pool } = require('pg')
const pg_client = new Pool({
  user: process.env.PG_USER,
  host: process.env.PG_HOST,
  database: process.env.PG_DB,
  password: process.env.PG_PWD,
  port: 5432,
})
Then, in order to copy the data from a NoSQL database with some 50,000+ items, I go through them pretty much like this. I know the code doesn't make perfect sense, but this is how the SQL calls are being made:
fiftyThousandOldItems.forEach(async (item) => {
  let nameId = await pg_client.query("SELECT id from some1000items where name='John'")
  pg_client.query("INSERT into items (id, name, url) VALUES (nameId, 1, 2)")
})
This does, however, quickly produce "sorry, too many clients already :: proc.c:362" and "error: remaining connection slots are reserved for non-replication superuser connections".
I have done similar runs before without experiencing this issue (but then with about 1,000 items).
As far as I understand, I do not need to call pg_client.connect() and pg_client.release() (or is it .end()?) any longer, according to an SO answer I unfortunately can't find any more. Is this really correct? (When I tried to before, I ended up with a lot of other issues that caused other types of problems.)
So, my questions are:
What am I doing wrong? Do I need to use pg_client.connect() before every SQL call and then pg_client.release() after every SQL call? Or is it pg_client.end()?
Is there a way to have this handled automatically? It doesn't seem very DRY, and it looks bug-prone.
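For illustration only: the "too many clients" error follows directly from the loop above, because forEach does not await the async callback, so all 50,000+ iterations try to grab a connection at roughly the same time. A rough sketch of the same work with bounded concurrency, using pool.query() so checkout and release are handled automatically (the item.name/item.url fields and the batch size are assumptions, not the asker's schema):

// Sketch: same work as the forEach above, but at most `batchSize` queries
// hold pool clients at any moment, well under the 25-connection cap.
async function copyItems(items) {
  const batchSize = 10;
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    await Promise.all(batch.map(async (item) => {
      const res = await pg_client.query(
        "SELECT id FROM some1000items WHERE name = $1", [item.name]);
      await pg_client.query(
        "INSERT INTO items (id, name, url) VALUES ($1, $2, $3)",
        [res.rows[0].id, item.name, item.url]);
    }));
  }
}

copyItems(fiftyThousandOldItems).catch(console.error);

With a Pool, there is no need for a connect()/release() pair around each query; pool.query() does that internally, and end() is only for shutting the whole pool down.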

Mongoose connection to replica set not working

I am running my own MongoDb Replica Set on Kubernetes.
It has 3 members, I exposed them all via NodePort.
I can connect to it via shell:
(feel free to connect, it's an empty, isolated example that will be destroyed)
mongo mongodb://stackoverflow:practice@134.122.99.184:31064,134.122.99.184:31086,134.122.99.184:32754/thirty3?authSource=admin
However, I cannot connect to it via mongoose 5.11.12 using the same connection string.
It only works until mongoose 4.5.8
mongoose.connect("mongodb://stackoverflow:practice@134.122.99.184:31064,134.122.99.184:31086,134.122.99.184:32754/thirty3?authSource=admin&replicaSet=thirty3&?retryWrites=true&w=majority",
  {
    useNewUrlParser: true,
    poolSize: 5,
    useUnifiedTopology: true,
    serverSelectionTimeoutMS: 5000, // Timeout after 5s instead of 30s
  })
I tried tons of configurations: gssapiServiceName=mongodb, replicaSetName=thirty3 (I checked the replica set name by running rs.conf()), and many more.
My question is - is there something wrong with mongoose's handling of these types of connections?
I have found similar issues that suggest downgrading as a solution, but downgrading is not ideal unless it's impossible to fix normally.
Please try the code samples above, the database is open for connections with the credentials exposed.
This configuration works for me with a local MongoDB replica set:
await mongoose.connect("mongodb://localhost:27017/movieku", { family: 4 })
Source: https://docs.w3cub.com/mongoose/connections

After many successive and successful queries SQL Server starts refusing connections

I have a GraphQL server which uses TypeORM to connect to a SQL Server instance; it also runs inside a Docker container. The connection parameters are:
let connection = await createConnection({
  type: "mssql",
  host: HOST,
  port: PORT,
  database: DB,
  username: USER,
  password: PASSWORD,
  // NOTE: Use typeorm-model-generator for Entities
  logging: false,
  synchronize: true,
  requestTimeout: 300000,
  connectionTimeout: 300000,
  entities: ["./models/mssql"],
  pool: {
    max: 1000, min: 1,
    idleTimeoutMillis: 3000,
    evictionRunIntervalMillis: 1500000,
  },
  options: {
    encrypt: false,
  }
})
Problem:
After numerous successful queries, the SQL Server starts refusing connections with the error:
'ConnectionError: Failed to connect to x.x.x.x:port - connect ECONNREFUSED x.x.x.x:port'
The problem happens when querying a specific type which, looking down the hierarchy, has a lot of resolvers; the amount of data also varies.
The more data, the more likely the problem will occur.
What is also interesting is that the problem is more likely to occur if the program is running inside a container or as a PM2 service.
I've tested the same data where connections are refused inside a container but are not refused in the VS Code debugger or when run from a terminal.
I tried fiddling with the pool options which gave modest results.
I've also checked that I do not exceed the number of connections allowed by the DB.
I've checked the logs on the SQL Server, there are no issues or errors in them at the time I make requests.
Edit:
I
I've just now added support for tracking login attempts on the SQL Server, and it now registers some 776 successful logins from this application in the period 15:03:46.01 - 15:04:48.07; after that, I assume, I start getting the errors.
II
I've tried changing the Network Packet Size option in the SQL Server settings and the packetSize connection option of typeorm -> mssql -> tedious, which didn't work but revealed some new details. Since I had to restart the server to apply the new SQL options, immediately after the restart the queries finished successfully.
So I tried tracking resource usage and saw that the server ramps up to 100% processor capacity pretty quickly; after that it has problems opening new connections, and if I allow too large a max in the pool options (in my case 500), it starts refusing connections.
When I bring the max property down, however, a new type of error arises, which manifests as a connection timing out for an unknown reason, this time stemming from tarn.js.
III
What I think is happening is that tarn.js (the underlying connection pool) has a default timeout for createTimeoutMillis / adding new connections to the pool, and this option is not exposed through the TypeORM API, so while it waits for the server to unfreeze and allot a new connection, it times out.
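For reference, the pool underneath TypeORM's mssql driver is tarn.js, and createTimeoutMillis is one of tarn's own options. The standalone sketch below is only to show the knobs the poster is describing; it is not wired into TypeORM, and openConnection/closeConnection plus the numbers are placeholders.

// Standalone tarn.js pool, illustrating the option names that exist
// underneath TypeORM/mssql. openConnection/closeConnection are placeholders.
const { Pool } = require('tarn');

const pool = new Pool({
  create: () => openConnection(),            // placeholder resource factory
  destroy: (conn) => closeConnection(conn),  // placeholder teardown
  min: 1,
  max: 50,                     // a cap the server can realistically serve
  acquireTimeoutMillis: 60000, // how long a caller may wait for a free resource
  createTimeoutMillis: 60000,  // how long a single create() may take
  idleTimeoutMillis: 3000,
  propagateCreateError: false  // retry failed creates instead of failing the acquire
});

// Usage shape: acquire() returns a pending acquisition whose promise
// resolves to the resource; release() hands it back to the pool.
async function withConnection(fn) {
  const conn = await pool.acquire().promise;
  try {
    return await fn(conn);
  } finally {
    pool.release(conn);
  }
}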

Node redis ETIMEOUT issue

I've been using node-redis for a while and so far so good. However, upon setting up a new environment, I had a typo in the hostname (or password) and it wouldn't connect. Because this was an already-working application I developed some time ago, it was hard to track down the actual issue: when you made requests against this server, it would just take up to the server's timeout, which was 5 minutes, and come back with error 500.
In the end I found out that it was the credentials for the Redis server. I use Redis to make my app faster by avoiding revalidating security tokens for up to an hour (since the validation process can take up to 2000 ms), so I store the token in Redis for future requests.
This has worked fine for years; however, just because this time I had a typo in the hostname or password, I noticed that if the Redis server can't be reached (for whatever reason) the whole application goes down. The idea is that Redis should be used if available; if not, the app should fall back to taking the long route but fulfil the request anyway.
So my question is: how do I tell node-redis to throw an error as soon as possible, and not wait until the ETIMEOUT error comes?
For example:
const client = redis.createClient(6380, "redis.host.com", { password: "verystrongone" });
client.on("error", err => {
console.log(err)
})
Based on this code, I get the console.log error AFTER it reaches the timeout (around 30-40 seconds). This is not good, because then my application is unresponsive for AT LEAST 30 seconds. What I want to achieve is that if Redis is down or something, it should just give up after 2-5 seconds. I use a very fast and reliable Redis server from Azure. It takes less than a second to connect and has never failed, I believe, but if it does, it will take the whole application down with it.
I tried stuff like retry_strategy, but I believe that option only kicks in after the initial ~30-second attempt.
Any suggestions?
So here's an interesting thing I observed.
When I connect to the redis cache instance using the following options, I am able to reproduce the error you're getting.
port: 6380,
host: "myaccount.redis.cache.windows.net",
auth_pass: "mysupersecretaccountkey"
When I specify an incorrect password, I get an error after 1 minute.
However, if I specify the tls parameter, I get an error almost instantaneously:
port: 6380,
host: "myaccount.redis.cache.windows.net",
auth_pass: "mysupersecretaccountkey",
tls: {
  servername: "myaccount.redis.cache.windows.net"
}
Can you try with the tls option?
I am still not able to reproduce the error if I specify an incorrect account name. I get the following error almost instantaneously:
Redis connection to
myincorrectaccountname.redis.cache.windows.net:6380 failed -
getaddrinfo ENOTFOUND myincorrectaccountname.redis.cache.windows.net
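Building on that: if the goal is to fail within a few seconds rather than ride out the default timeout, node-redis (the v3-style createClient(port, host, options) API used in the question) also accepts a connect_timeout in milliseconds and a retry_strategy that can give up early. A rough sketch with placeholder host and password; the exact numbers are just examples.

const redis = require("redis");

const client = redis.createClient(6380, "redis.host.com", {
  password: "verystrongone",
  tls: { servername: "redis.host.com" },   // per the answer above, for Azure Cache
  connect_timeout: 5000,                   // give up on the initial connect after 5s
  retry_strategy: (options) => {
    // Stop retrying after a couple of attempts so the app can fall back
    // to the slow path instead of hanging.
    if (options.attempt > 2) {
      return new Error("Redis unavailable, falling back");
    }
    return 1000; // wait 1s between attempts
  }
});

client.on("error", (err) => {
  console.log("Redis error:", err.message);
});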

Does mongoDB have reconnect issues or am i doing it wrong?

I'm using Node.js and MongoDB - and I'm having some connection issues.
Well, actually "wake" issues! It connects perfectly well, is super fast, and I'm generally happy with the results.
My problem: if I don't use the connection for a while (I say "a while" because the timeframe varies, 5+ mins) it seems to stall. I don't get disconnection events fired - it just hangs.
Eventually I get a response like Error: failed to connect to [ * .mongolab.com: * ] - ( * = masked values)
A quick restart of the app, and the connection is great again. Sometimes, if I don't restart the app, I can refresh and it reconnects happily.
This is why i think it is "wake" issues.
Rough outline of code:
I've not included the full code - I don't think it's needed. It works (apart from the connection dropout).
Things to note: there is just the one "connect" - I never close it, and I never reopen it.
I'm using Mongoose and Socket.IO.
/* constants */
var mongoConnect = 'myworkingconnectionstring-includingDBname';
/* includes */
/* settings */
/* Schema */
var db = mongoose.connect(mongoConnect);
/* Socketio */
io.configure(function (){
  io.set('authorization', function (handshakeData, callback) {
  });
});
io.sockets.on('connection', function (socket) {
});//sockets
io.sockets.on('disconnect', function(socket) {
  console.log('socket disconnection')
});
/* The Routing */
app.post('/login', function(req, res){
});
app.get('/invited', function(req, res){
});
app.get('/', function(req, res){
});
app.get('/logout', function(req, res){
});
app.get('/error', function(req, res){
});
server.listen(port);
console.log('Listening on port '+port);
db.connection.on('error', function(err) {
  console.log("DB connection Error: "+err);
});
db.connection.on('open', function() {
  console.log("DB connected");
});
db.connection.on('close', function(str) {
  console.log("DB disconnected: "+str);
});
I have tried various configs here, like opening and closing all the time, but I believe the general consensus is to do as I am: one open connection wrapping the lot.
I have tried a connection tester that keeps checking the status of the connection... even though this appears to say everything's OK, the issue still happens.
I have had this issue from day one. I have always hosted the MongoDB with MongoLab.
The problem appears to be worse on localhost, but I still have the issue on Azure and now nodejit.su.
As it happens everywhere, it must be me, MongoDB, or MongoLab.
Incidentally, I have had a similar experience with the PHP driver too (to confirm, this question is about Node.js though).
It would be great for some help - even if someone just says "this is normal"
thanks in advance
Rob
UPDATE: Our support article for this topic (essentially a copy of this post) has moved to our connection troubleshooting doc.
There is a known issue that the Azure IaaS network enforces an idle timeout of roughly thirteen minutes (empirically arrived at). We are working with Azure to see if we can't make things more user-friendly, but in the meantime others have had success by configuring their driver options to work around the issue.
Max connection idle time
The most effective workaround we've found in working with Azure and our customers has been to set the max connection idle time below four minutes. The idea is to make the driver recycle idle connections before the firewall forces the issue. For example, one customer, who is using the C# driver, set MongoDefaults.MaxConnectionIdleTime to one minute and it cleared up their issues.
MongoDefaults.MaxConnectionIdleTime = TimeSpan.FromMinutes(1);
The application code itself didn't change, but now behind the scenes the driver aggressively recycles idle connections. The result can be seen in the server logs as well: lots of connection churn during idle periods in the app.
There are more details on this approach in the related mongo-user thread, SocketException using C# driver on azure.
Keepalive
You can also work around the issue by making your connections less idle with some kind of keepalive. This is a little tricky to implement unless your driver supports it out of the box, usually by taking advantage of TCP Keepalive. If you need to roll your own, make sure to grab each idle connection from the pool every couple minutes and issue some simple and cheap command, probably a ping.
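For a Node.js app, one rough way to approximate that keepalive is a small interval that pings through the existing connection. A sketch only, assuming Mongoose is already connected; the two-minute interval is an arbitrary value comfortably under the idle cutoffs discussed above.

// Keepalive sketch: issue a cheap ping every two minutes so pooled
// sockets never look idle to the firewall.
var mongoose = require('mongoose');

setInterval(function () {
  // admin().ping() is a lightweight round trip; failures are only logged
  // here and left to the normal error/reconnect handling.
  mongoose.connection.db.admin().ping(function (err) {
    if (err) console.log('keepalive ping failed: ' + err);
  });
}, 2 * 60 * 1000);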
Handling disconnects
Disconnects can happen from time to time even without an aggressive firewall setup. Before you get into production you want to be sure to handle them correctly.
First, be sure to enable auto reconnect. How to do so varies from driver to driver, but when the driver detects that an operation failed because the connection was bad turning on auto reconnect tells the driver to attempt to reconnect.
But this doesn't completely solve the problem. You still have the issue of what to do with the failed operation that triggered the reconnect. Auto reconnect doesn't automatically retry failed operations. That would be dangerous, especially for writes. So usually an exception is thrown and the app is asked to handle it. Often retrying reads is a no-brainer. But retrying writes should be carefully considered.
The mongo shell session below demonstrates the issue. The mongo shell by default has auto reconnect enabled. I insert a document in a collection named stuff then find all the documents in that collection. I then set a timer for thirty minutes and tried the same find again. It failed, but the shell automatically reconnected and when I immediately retried my find it worked as expected.
% mongo ds012345.mongolab.com:12345/mydatabase -u *** -p ***
MongoDB shell version: 2.2.2
connecting to: ds012345.mongolab.com:12345/mydatabase
> db.stuff.insert({})
> db.stuff.find()
{ "_id" : ObjectId("50f9b77c27b2e67041fd2245") }
> db.stuff.find()
Fri Jan 18 13:29:28 Socket recv() errno:60 Operation timed out 192.168.1.111:12345
Fri Jan 18 13:29:28 SocketException: remote: 192.168.1.111:12345 error: 9001 socket exception [1] server [192.168.1.111:12345]
Fri Jan 18 13:29:28 DBClientCursor::init call() failed
Fri Jan 18 13:29:28 query failed : mydatabase.stuff {} to: ds012345.mongolab.com:12345
Error: error doing query: failed
Fri Jan 18 13:29:28 trying reconnect to ds012345.mongolab.com:12345
Fri Jan 18 13:29:28 reconnect ds012345.mongolab.com:12345 ok
> db.stuff.find()
{ "_id" : ObjectId("50f9b77c27b2e67041fd2245") }
We're here to help
Of course, if you have any questions please feel free to contact us at support@mongolab.com. We're here to help.
Thanks for all the help guys - I have managed to solve this issue on both localhost and deployed to a live server.
Here is my now working connect code:
var MONGO = {
  username: "username",
  password: "pa55W0rd!",
  server: '******.mongolab.com',
  port: '*****',
  db: 'dbname',
  connectionString: function(){
    return 'mongodb://'+this.username+':'+this.password+'@'+this.server+':'+this.port+'/'+this.db;
  },
  options: {
    server: {
      auto_reconnect: true,
      socketOptions: {
        connectTimeoutMS: 3600000,
        keepAlive: 3600000,
        socketTimeoutMS: 3600000
      }
    }
  }
};

var db = mongoose.createConnection(MONGO.connectionString(), MONGO.options);

db.on('error', function(err) {
  console.log("DB connection Error: "+err);
});
db.on('open', function() {
  console.log("DB connected");
});
db.on('close', function(str) {
  console.log("DB disconnected: "+str);
});
I think the biggest change was to use "createConnection" over "connect" - I had used this before, but maybe the options help now. This article helped a lot http://journal.michaelahlers.org/2012/12/building-with-nodejs-persistence.html
If I'm honest, I'm not overly sure why I added those options - as mentioned by @jareed, I also found some people having success with "MaxConnectionIdleTime" - but as far as I can see the JavaScript driver doesn't have this option; this was my attempt at replicating that behaviour.
So far so good - hope this helps someone.
UPDATE (18 April 2013): note, this is a second app with a different setup.
Now, I thought I had this solved, but the problem reared its ugly head again on another app recently - with the same connection code. Confused!!!
However, the setup was slightly different…
This new app was running on a Windows box using IISNode. I didn't see this as significant initially.
I read there were possibly some issues with Mongo on Azure (@jareed), so I moved the DB to AWS - still the problem persisted.
So I started playing about with that options object again, reading up quite a lot on it, and came to this conclusion:
options: {
  server: {
    auto_reconnect: true,
    poolSize: 10,
    socketOptions: {
      keepAlive: 1
    }
  },
  db: {
    numberOfRetries: 10,
    retryMiliSeconds: 1000
  }
}
That was a bit more educated than my original options object, I'd say.
However - it's still no good.
Now, for some reason I had to get off that Windows box (something to do with a module not compiling on it) - it was easier to move than to spend another week trying to get it to work.
So I moved my app to Nodejitsu. Lo and behold, my connection stayed alive! Woo!
So… what does this mean? I have no idea! What I do know is that those options seem to work on Nodejitsu… for me.
I believe IISNode uses some kind of "forever" script for keeping the app alive. To be fair, the app doesn't crash for this to kick in, but I think there must be some kind of "app cycle" that is refreshed constantly - this is how it can do continuous deployment (FTP code up, no need to restart the app) - maybe this is a factor; but I'm just guessing now.
Of course, all this means is that it isn't solved. It's still not solved. It's just solved for me, in my setup.
A couple of recommendations for people still having this issue:
Make sure you are using the latest mongodb client for node.js. I noticed significant improvements in this area when migrating from v1.2.x to v1.3.10 (the latest as of today)
You can pass an options object to the MongoClient.connect. The following options worked for me when connecting from Azure to MongoLab:
options = {
  db: {},
  server: {
    auto_reconnect: true,
    socketOptions: { keepAlive: 1 }
  },
  replSet: {},
  mongos: {}
};

MongoClient.connect(dbUrl, options, function(err, dbConn) {
  // your code
});
See this other answer in which I describe how to handle the 'close' event which seems to be more reliable. https://stackoverflow.com/a/20690008/446681
Enable the auto_reconnect Server option like this:
var db = mongoose.connect(mongoConnect, {server: {auto_reconnect: true}});
The connection you're opening here is actually a pool of 5 connections (by default) so you're right to just connect and leave it open. My guess is that you intermittently lose connectivity with mongolab and your connections die when that occurs. Hopefully, enabling auto_reconnect resolves that.
Increasing timeouts may help.
"socketTimeoutMS" : How long a send or receive on a socket can take
before timing out.
"wTimeoutMS" : It controls how many milliseconds the server waits for
the write concern to be satisfied.
"connectTimeoutMS" : How long a connection can take to be opened
before timing out in milliseconds.
$m = new MongoClient("mongodb://127.0.0.1:27017",
array("connect"=>TRUE, "connectTimeoutMS"=>10, "socketTimeoutMS"=>10,
"wTimeoutMS"=>10));
$db= $m->mydb;
$coll = $db->testData;
$coll->insert($paramArr);
I had a similar problem being disconnected from MongoDB periodically. Doing two things fixed it:
Make sure your computer never sleeps (that'll kill your network connection).
Bypass your router/firewall (or configure it properly, which I haven't figured out how to do yet).
