After many successive and successful queries SQL Server starts refusing connections - node.js

I have a GraphQL server that uses TypeORM to connect to SQL Server; it runs inside a Docker container. The connection parameters are:
let connection = await createConnection({
    type: "mssql",
    host: HOST,
    port: PORT,
    database: DB,
    username: USER,
    password: PASSWORD,
    // NOTE: Use typeorm-model-generator for Entities
    logging: false,
    synchronize: true,
    requestTimeout: 300000,
    connectionTimeout: 300000,
    entities: ["./models/mssql"],
    pool: {
        max: 1000,
        min: 1,
        idleTimeoutMillis: 3000,
        evictionRunIntervalMillis: 1500000,
    },
    options: {
        encrypt: false,
    },
})
Problem:
After numerous successful queries, SQL Server starts refusing connections with the error:
'ConnectionError: Failed to connect to x.x.x.x:port - connect ECONNREFUSED x.x.x.x:port'
The problem happens when querying a specific type that, looking down the hierarchy, has a lot of resolvers; the amount of data also varies. The more data, the more likely the problem will occur.
What is also interesting is that the problem is more likely to occur when the program runs inside a container or as a PM2 service. I've tested the same data where connections are refused inside a container but are not refused in the VS Code debugger or when run from a terminal.
I tried fiddling with the pool options, which gave modest results. I've also checked that I do not exceed the number of connections allowed by the DB. I've checked the logs on the SQL Server; there are no issues or errors in them at the time I make the requests.
Edit:
I
I've just now added tracking of login attempts on the SQL Server, and it registers some 776 successful logins from this application in the period 15:03:46.01 - 15:04:48.07; after that, I assume, I get the errors.
II
I've tried changing the Network Packet Size option in the SQL Server settings and the packetSize connection option of typeorm -> mssql -> tedious, which didn't work but revealed some new details. Since I had to reset the server to apply the new options, the queries finished successfully immediately after the reset.
So I tried tracking resources and saw that the server ramps up to 100% processor capacity pretty quickly and after that has problems opening new connections; if I allow too large a max in the pool options (in my case 500), it starts refusing connections.
When I bring the max property down, however, a new type of error arises, which manifests as a connection timeout for an unknown reason, this time stemming from tarn.js.
III
What I think is happening is that tarn.js (the underlying connection pool) has a default createTimeoutMillis timeout for adding new connections to the pool. This option is not exposed through the TypeORM API, so while the pool waits for the frozen server to allot a new connection, it times out.
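If that hypothesis is right, one thing worth trying is handing the timeouts straight to the pool. A minimal sketch, assuming TypeORM forwards the pool block verbatim to mssql, which passes it on to tarn.js; the option names below come from tarn.js, not from the TypeORM API, so treat this as unverified:

let connection = await createConnection({
    type: "mssql",
    host: HOST,
    port: PORT,
    database: DB,
    username: USER,
    password: PASSWORD,
    entities: ["./models/mssql"],
    pool: {
        max: 50,                      // keep well below the server's connection limit
        min: 1,
        idleTimeoutMillis: 3000,
        acquireTimeoutMillis: 60000,  // tarn: wait up to 60s for a free connection
        createTimeoutMillis: 60000,   // tarn: allow up to 60s to open a new connection
    },
    options: {
        encrypt: false,
    },
})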

Related

postgresql and node with redis still making connection to db pool?

I'm a bit stuck here and was hoping to get some help.
My node application has a separate module where I connect to Postgres and export the pool like so:
const { Pool, Client } = require('pg');

const pool = new Pool({
    user: process.env.POSTGRES_USER,
    host: process.env.POSTGRES_URL,
    database: process.env.POSTGRES_DATABASE,
    password: process.env.POSTGRES_PASSWORD,
    port: process.env.POSTGRES_PORT,
    keepAlive: 0,
    ssl: {
        rejectUnauthorized: false,
        sslmode: 'require',
    },
    connectionTimeoutMillis: 10000, // 10 seconds
    allowExitOnIdle: true,
    max: 10,
});

pool.connect()
    .then(() => console.log('postgres connected'))
    .catch(err => console.error(err));

module.exports = pool;
On my route I have a Redis cache as middleware. This works as expected, and I can confirm requests are being served up by Redis; the logic in the route does not run when the request is cached. However, I was doing some load testing to see how everything would handle spikes, and I noticed I started to get errors from Postgres:
Error: timeout exceeded when trying to connect
I also got errors talking about max connections etc.
I have tried to increase the max pool connection but still seem to get this error when running some larger load tests.
My question is: why would PG be trying to connect if the connection should be shared? Additionally, why is it even trying to connect if the request is cached?
Any help would be appreciated!
Apparently some of your stress test cases are missing the redis cache. You haven't shown any code relevant to that, so what more can be said?
The error you show is not generated by PostgreSQL, it is generated by node's 'pg' module. You configured it to only allow 10 simultaneous connections. If more than that are requested, they have to line up and wait. And you also configured it to wait only for 10 seconds before bombing out with an error, and that is exactly what you are seeing.
You vaguely allude to other errors, but you would have to share the actual error message with us if you want help.
The system seems to be operating as designed. You did a stress test to see what would happen, and you have seen what happens.
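For reference, a sketch of the two knobs at play, using node-postgres as in the question; the values are illustrative:

const { Pool } = require('pg');

// Illustrative values: with max = 10 and a 10-second acquisition timeout,
// the 11th concurrent query lines up and waits; if no connection frees up
// within 10s, pg throws "timeout exceeded when trying to connect".
const pool = new Pool({
    max: 10,                        // at most 10 simultaneous connections
    connectionTimeoutMillis: 10000, // wait at most 10s for a free connection
});

// Prefer pool.query for one-off queries: it checks out a client, runs the
// query, and releases the client automatically, so a forgotten release()
// can't starve the pool under load.
pool.query('SELECT now()')
    .then(res => console.log(res.rows))
    .catch(err => console.error(err));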

Mongoose connection to replica set not working

I am running my own MongoDb Replica Set on Kubernetes.
It has 3 members, I exposed them all via NodePort.
I can connect to it via shell:
(feel free to connect, it's an empty, isolated example that will be destroyed)
mongo mongodb://stackoverflow:practice@134.122.99.184:31064,134.122.99.184:31086,134.122.99.184:32754/thirty3?authSource=admin
However, I cannot connect to it via mongoose 5.11.12 using the same connection string.
It only works until mongoose 4.5.8
mongoose.connect(
    "mongodb://stackoverflow:practice@134.122.99.184:31064,134.122.99.184:31086,134.122.99.184:32754/thirty3?authSource=admin&replicaSet=thirty3&retryWrites=true&w=majority",
    {
        useNewUrlParser: true,
        poolSize: 5,
        useUnifiedTopology: true,
        serverSelectionTimeoutMS: 5000, // Timeout after 5s instead of 30s
    }
)
I tried tons of configurations: gssapiServiceName=mongodb, replicaSetName=thirty3 (I checked the replica set name by running rs.conf()), and many more.
My question is - is there something wrong with mongoose handling these types of communications?
I have found similar issues that suggest downgrading as a solution, but downgrading is not ideal unless the problem is impossible to fix normally.
Please try the code samples above; the database is open for connections with the credentials exposed.
This configuration works for me on a local MongoDB with a replica set.
await mongoose.connect("mongodb://localhost:27017/movieku", { family: 4 })
Source: https://docs.w3cub.com/mongoose/connections
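If the replica set advertises internal Kubernetes addresses in rs.conf(), the unified topology will try to reach members by those addresses and fail even though the seed list is reachable. A hypothetical workaround, untested against the setup above (directConnection is available in the MongoDB driver bundled with mongoose 5.11):

// Untested sketch: directConnection=true skips replica-set discovery, so
// the driver talks only to the seed host instead of the member addresses
// advertised in rs.conf(). Note this disables automatic failover.
await mongoose.connect(
    "mongodb://stackoverflow:practice@134.122.99.184:31064/thirty3?authSource=admin&directConnection=true",
    { useNewUrlParser: true, useUnifiedTopology: true }
);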

Node redis ETIMEOUT issue

I've been using node-redis for a while, and so far so good. However, upon setting up a new environment, I had a typo in the hostname (or password) and it wouldn't connect. Because this was an already-working application I developed some time ago, it was hard to track down the actual issue: when you made requests against this server, it would just hang until the server's timeout (5 minutes) and come back with error 500.
In the end I found out it was the credentials for the Redis server. I use Redis to make my app faster by avoiding revalidating security tokens for up to an hour (the validation process can take up to 2000ms), so I store the token in Redis for future requests.
This has worked fine for years. However, because this time I had a typo in the hostname or password, I noticed that if the Redis server can't connect (for whatever reason), the whole application goes down. The idea is that Redis should be used if available; if not, the app should fall back to the long route but fulfill the request anyway.
So my question is: how do I tell node-redis to throw an error as soon as possible, and not wait until the ETIMEOUT error comes?
For example:
const client = redis.createClient(6380, "redis.host.com", { password: "verystrongone" });

client.on("error", err => {
    console.log(err);
});
Based on this code, I get the console.log error AFTER it reaches the timeout (around 30-40 seconds). This is not good, because my application is then unresponsive for AT LEAST 30 seconds. What I want to achieve is that if Redis is down or something, it should just give up after 2-5 seconds. I use a very fast and reliable Redis server from Azure. It takes less than a second to connect and has never failed, I believe, but if it does, it will take the whole application with it.
I tried stuff like retry_strategy but I believe that option kicks in only after the initial ~30 seconds attempt.
Any suggestions?
So here's an interesting thing I observed.
When I connect to the redis cache instance using the following options, I am able to reproduce the error you're getting:

const client = redis.createClient({
    port: 6380,
    host: "myaccount.redis.cache.windows.net",
    auth_pass: "mysupersecretaccountkey"
});
When I specify an incorrect password, I get an error after 1 minute.
However, if I specify tls parameter I get an error almost instantaneously:
const client = redis.createClient({
    port: 6380,
    host: "myaccount.redis.cache.windows.net",
    auth_pass: "mysupersecretaccountkey",
    tls: {
        servername: "myaccount.redis.cache.windows.net"
    }
});
Can you try with the tls option?
I am still not able to reproduce the error if I specify an incorrect account name. I get the following error almost instantaneously:
Redis connection to
myincorrectaccountname.redis.cache.windows.net:6380 failed -
getaddrinfo ENOTFOUND myincorrectaccountname.redis.cache.windows.net
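Separately, if the goal is simply to fail fast whatever the cause, node_redis (v2/v3) accepts a connect_timeout option and a retry_strategy callback. A sketch with illustrative values; the option names are from the node_redis README:

const redis = require('redis');

// Sketch for node_redis v2/v3: fail fast instead of waiting for ETIMEDOUT.
// connect_timeout caps the total time spent establishing a connection;
// returning an Error from retry_strategy stops further reconnect attempts.
const client = redis.createClient({
    port: 6380,
    host: "redis.host.com",
    password: "verystrongone",
    connect_timeout: 5000, // give up connecting after 5 seconds
    retry_strategy: (options) => {
        if (options.total_retry_time > 5000) {
            return new Error("Redis unreachable, falling back to the slow path");
        }
        return 1000; // otherwise retry after 1 second
    },
});

client.on("error", err => {
    console.log(err);
});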

1 db connection for all functions in API call

I have a route that handles API calls for timepunches. One of the calls is to "clock_in".
router.route('/clock_in').post(managerCheck, startTimeCheck, isClockedIn, clockIn);
Each of these functions performs its own DB connection, queries the DB for some info, then responds to the user or goes to the next() function.
I'm using pool from 'pg-pool'.
My connection looks like this.
export const isClockedIn = (request, response, next) => {
    const query = `select * from....`;
    const values = [value1, value2];
    pool.connect((err, client, release) => {
        client.query(query, values, (err, result) => {
            // do stuff
        });
    });
};
and the connection is essentially the same for all functions.
What I'd like to do is have only one instance of pool.connect, with each function in the API call using that connection for its client.query. I'm just not sure how I'd set that up.
Hopefully my question is clear. All my code works; it's just not efficient, since it makes multiple DB connections for one API call.
I learned a lot by watching my db connections as I made calls from my API.
When you make your first call with pg.Pool, a connection is made to the DB. After your query finishes, the connection is placed in an idle state; if another pg.Pool command is run, it will reuse that idle connection. The connection closes after 10 seconds of being idle (you can configure this).
You can also set a max amount of connections (default 10). So if you run 10 queries at the same time, they will all open a connection and run. Their connections will be left idle after completion. If you run another 10 at the same time, they will reuse those connections.
If you want to force only one connection ever that never closes (not saying you want to do this), set the idle timeout to 0 and max to 1 connection. Then if you run 10 queries at once, they will line up and run one at a time.
const pool = new pg.Pool({
    user: 'postgres',
    host: 'localhost',
    database: 'database',
    password: 'password',
    port: 5000,
    idleTimeoutMillis: 0,
    max: 1,
});
This page is super helpful, although I didn't understand much of it until I watched the database connection as my API ran.
https://node-postgres.com/api/pool
Note: The above code should be in its own JS file, and all connections should reference it. If you create new pg.Pools, I believe those will open their own connections, which may not be what you want.
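As for the original ask (one checkout shared by every middleware in a call), a hypothetical sketch: acquire a client in a leading middleware, stash it on response.locals (that name is my choice, not from the question), and release it when the response finishes:

// Hypothetical sketch: one pooled client per API call, shared by all
// middleware via response.locals. Names like attachClient are illustrative;
// `pool` is the exported pool from the question's module.
export const attachClient = async (request, response, next) => {
    const client = await pool.connect();           // check out one connection
    response.locals.db = client;
    response.on('finish', () => client.release()); // return it to the pool when done
    next();
};

export const isClockedIn = async (request, response, next) => {
    const result = await response.locals.db.query(query, values);
    // do stuff
    next();
};

// router.route('/clock_in').post(attachClient, managerCheck, startTimeCheck, isClockedIn, clockIn);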

Sequelize unable to connect to SQL Server through network

My setup is the following:
I have a Virtual Machine running all of my Database processes, let's call it DB-VM.
I'm currently developing at my own workstation (completely detached from DB-VM, except that we are on the same network).
I've created a valid connection string, validated both by another database connection service through IIS and through a Data Link Properties file (.udl).
This connection is described by the connection string as:
Provider=SQLOLEDB.1;Integrated Security=SSPI;Persist Security Info=False;Data Source=DB-VM\MY_DATABASE.
I tried to insert it into my Sequelize configuration as following:
const sequelize = new Sequelize({
    dialect: 'mssql',
    dialectModulePath: 'sequelize-msnodesqlv8',
    dialectOptions: {
        // Note: the backslash must be escaped in a JS string literal
        connectionString: 'Provider=SQLOLEDB.1;Integrated Security=SSPI;Persist Security Info=False;Data Source=DB-VM\\MY_DATABASE',
        trustedConnection: true,
    }
});
And then proceeded to try and authenticate through:
sequelize.authenticate().then(() => {
    console.log('Connection established successfully!');
}).catch(err => {
    console.log(err);
});
And the error is as follows:
Notice: The database uses dynamic ports, therefore I can't specify the port through the port property.
Notice 2: Named Pipes are disabled in my database settings, and I'm not sure I will be able to enable them.
Notice 3: The database is already set up to allow remote connections (it is currently used through a webpage and works fine!)
According to this line, the sequelize-msnodesqlv8 library expects "Driver" to be part of the connection string; otherwise it tries to guess. Besides that, all the examples of connection strings here1 and here2 use either Server=myServerName\theInstanceName or just Server=myServerName instead of Data Source=....
So step 1 is to fix your connection string. You could try one of the examples, like:
dialectOptions: {
    connectionString: 'Driver={SQL Server Native Client 10.0};Server=DB-VM;Database=MY_DATABASE;Trusted_Connection=yes;'
},
After that if you get a new error, please update the question.
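Putting that together with the setup from the question, a sketch; the driver name and the DB-VM\MY_DATABASE instance name are assumptions carried over from the question, so adjust them to whatever is actually installed:

const Sequelize = require('sequelize');

// Sketch only: check which SQL Server driver is installed (e.g. via the
// ODBC Data Sources admin tool) and substitute its exact name below.
const sequelize = new Sequelize({
    dialect: 'mssql',
    dialectModulePath: 'sequelize-msnodesqlv8',
    dialectOptions: {
        // Named instance with dynamic ports: Server=host\\instance lets the
        // SQL Server Browser service resolve the port, so none is hardcoded.
        connectionString:
            'Driver={SQL Server Native Client 10.0};' +
            'Server=DB-VM\\MY_DATABASE;' +
            'Database=MY_DATABASE;' +
            'Trusted_Connection=yes;',
        trustedConnection: true,
    },
});

sequelize.authenticate()
    .then(() => console.log('Connection established successfully!'))
    .catch(err => console.log(err));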
