Knex: Timeout acquiring a connection - node.js

As of today, I get the following error when I try to connect locally to a Postgres database (v12) using knex.js.
Unhandled rejection TimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
This happens on a project I've been working on for a year without any problems. To isolate the issue, I created a new database with one table. When I run the following lines of code, I get the same error:
const knex = require('knex');
const db = knex({
  client: 'pg',
  connection: 'postgresql://postgres:postgres@localhost/a_test',
  pool: {
    min: 0,
    max: 10,
  },
});
db.from('test_table')
  .select(['id'])
  .then(r => {
    console.log(r);
  });
I have no clue what might cause this. A couple of weeks ago everything worked fine, and I didn't change anything in the meantime. I run Postgres locally with Postgres.app, and when I connect to the database using psql, everything works fine. Any ideas where I could look to resolve this?

The problem
Node.js v14 made some breaking changes that affected the pg module, causing the process to exit directly at the connect() call.
You can confirm this by downgrading to v13 (I call it the v14 hell), which used to be a workaround.
A fix for pg was released in pg v8.0.3.
Fix for v14
If you are using Postgres with Node.js v14 and above, make sure the pg driver module is at version >= 8.0.3, and preferably upgrade to the latest:
npm install pg@latest --save
If you are not using Postgres, try updating your DB driver; it may be the same problem. Also try with Node.js v13 to confirm it's the same issue (the v14 hell).
What happened in v14
If, like me, you want to know the details of what happened:
Node v14 made some breaking changes to the API, and many other things changed too, including the OpenSSL version.
For Postgres and the pg module, the problem was as described in this comment on this thread:
The initial readyState (a private/undocumented API that pg uses) of net.Socket seems to have changed from 'closed' to 'open' in Node 14.
It's hard to fix with perfect backwards compatibility, but I think I have a patch that's close enough.
And as per this PR, you can see the changes in this diff.
In short, as mentioned, the onReady API changed for a net.Socket, and the implemented solution was to not use onReady at all.
And as per this:
Connection now always calls connect on its stream when connect is called on it.
In the older version, connect was called only if the socket was in the closed state; the readyState usage was eliminated.
Check this line and you'll understand.
Depending on the implementation, many things may or may not be affected by these core changes.
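You can see the changed initial state directly. A minimal sketch, assuming Node's core net module:
const net = require('net');
const socket = new net.Socket();
// Node <= 13 printed 'closed' here; Node >= 14 prints 'open',
// which broke pg's "only connect the stream if it is closed" check.
console.log(socket.readyState);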
Node.js v14 relevant change
And because I wanted to see where the change happened, here you go:
https://github.com/nodejs/node/pull/32272
You can check the changelog too:
https://github.com/nodejs/node/blob/master/doc/changelogs/CHANGELOG_V14.md
Detailed why + exiting with no logged error
Also worth mentioning: the breaking changes made pg crash the process at the connect() call, which is why it exited and no error was logged.
In more detail, here is how it happened. Sequelize has a Postgres dialect implementation, which uses pg. The pg client creates a Connection object, which emits a connect event once it connects. Because Node v14 changed the stream behavior to start in the open state, the stream-connection step was skipped due to the readyState check (expected to be closed, but now open instead), and the stream was treated as already connected (the else block) when it was not, so the connect event was emitted immediately. When that happened, the client called either requestSsl() or startup() on the connection object, and both call this._stream.write. Because the stream was not actually connected, an error occurred, and this error was not caught. The promise in the Sequelize driver then stayed unresolved, the event loop emptied, and Node, by its default behavior, simply exited.
You can follow the steps through the lines of code:
The Sequelize pg adapter calls the pg client to create a connection, returning a promise
The pg client calls connect on a Connection object
The pg connection's connect() runs and emits connect, believing the stream is connected because of the v14 change
The pg client catches the connect event and runs its callback: requestSsl() or startup() will run
One of those methods runs and stream.write is called (requestSsl(), startup())
The stream errors (not caught)
The promise in the Sequelize Postgres adapter stays unresolved
The event loop is empty => Node.js exits
Why Node.js exits (unresolved promises)
https://github.com/nodejs/node/issues/22088
Node exits without error and doesn't await promise (event callback)
What happens when a Promise never resolves?
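You can reproduce that default behavior in isolation. A tiny sketch (no pg involved):
// A promise that never settles: once the event loop has nothing else
// scheduled, Node simply exits, with no error logged.
new Promise(() => {}).then(() => console.log('never printed'));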

Looks like with Node 14, a newer pg driver version (>= 8.0.3) should be used. https://github.com/knex/knex/issues/3912

It is a fact that this error can be caused by many different issues; today I found a new one the hard way, after scrolling up and down countless threads like this one to no avail.
When setting up the pool, knex allows us to optionally register an afterCreate callback. If this callback is added, it is imperative that you call the done callback that is passed as the last parameter to your registered callback, or else no connection will be acquired, leading to the timeout.
.....
pool: {
  afterCreate: (conn, done) => {
    // .... add logic here ....
    // you must call done with the new connection
    done(null, conn);
  },
}
.....
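For example, a hypothetical afterCreate that runs a setup query on each new pg connection might look like this (the SET query is just an illustration; the important part is that done(err, conn) is called on every path):
pool: {
  afterCreate: (conn, done) => {
    // run some one-time setup on the raw pg connection
    conn.query('SET timezone = "UTC";', (err) => {
      // hand the connection (or the error) back to the pool
      done(err, conn);
    });
  },
}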

I just upgraded my pg module using npm install pg@latest --save and my knex now works.

Turns out the problem was Node v14. When I use v13 or earlier it works.

"express": "^4.16.2",
"knex": "^0.14.2",
"objection": "^2.1.3",
"pg": "^8.0.3",
and ran npm install.
I fixed my problem (at the end of the 4th day).

Related

Redis Error "max number of clients reached"

I am running a Node.js application using the forever npm module.
The Node application also connects to a Redis DB for a cache check. Quite often the API stops working with the following error in the forever log.
{ ReplyError: Ready check failed: ERR max number of clients reached
at parseError (/home/myapp/core/node_modules/redis/node_modules/redis-parser/lib/parser.js:193:12)
at parseType (/home/myapp/core/node_modules/redis/node_modules/redis-parser/lib/parser.js:303:14)
at JavascriptRedisParser.execute (/home/myapp/ecore/node_modules/redis/node_modules/redis-parser/lib/parser.js:563:20) command: 'INFO', code: 'ERR' }
When I execute the client list command on the Redis server, it shows too many open connections. I have also set timeout = 3600 in my Redis configuration.
I do not have any unclosed Redis connection objects in my application code.
This happens once or twice a week depending on the application load; as a stopgap solution I restart the node server (which works).
What could be the permanent solution in this case?
I have figured out why. This has nothing to do with Redis. Increasing the OS file descriptor limit was just a temporary solution. I was using Redis in a web application and a connection was created for every new request.
When the server was occasionally restarted, all the held-up connections from the express server were released.
I solved this by creating a global connection object and reusing it. A new connection is created only when necessary.
You can do so by creating a global connection object, connecting once, and making sure it is connected each time before you use it. Check whether there is an existing solution for this in your programming language. In my case it was Perl with the Dancer framework, and I used a module called Dancer2::Plugin::Redis.
redis_plugin
Returns a Dancer2::Plugin::Redis instance. You can use redis_plugin to pass the plugin instance to 3rd party modules (backend api) so you can access the existing Redis connection there. You will need to access the actual methods of the plugin instance.
If you are not running a web server and are instead running a worker process or any background job, you can use this simple helper function to reuse the connection.
Perl example:
sub get_redis_connection {
    my $redis = Redis->new(server => "www.example.com:6372", debug => 0);
    $redis->auth('abcdefghijklmnop');
    return $redis;
}
...
## when required
unless ($redisclient->ping) {
    warn "creating new redis connection";
    $redisclient = get_redis_connection();
}
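Since the question itself is about Node, the same pattern there might look roughly like this. A sketch, assuming a v3-era redis npm package (host, port and credentials are placeholders):
const redis = require('redis');

let client; // module-level singleton, shared by all requests

function getRedisClient() {
  if (!client) {
    client = redis.createClient({ host: 'www.example.com', port: 6372 });
    client.on('error', (err) => console.error('Redis error:', err));
  }
  return client;
}

module.exports = { getRedisClient };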
I was running into this issue in my chat app because I was creating a new Redis instance each time something connected, rather than just creating it once.
// THE WRONG WAY
export const getRedisPubSub = () => new RedisPubSub({
  subscriber: new Redis(REDIS_CONNECTION_CONFIG),
  publisher: new Redis(REDIS_CONNECTION_CONFIG),
});
and where I wanted to use the connection I was calling:
// THE WRONG WAY
getRedisPubSub();
I fixed it by creating the connection just once, when my app loads:
export const redisPubSub = new RedisPubSub({
  subscriber: new Redis(REDIS_CONNECTION_CONFIG),
  publisher: new Redis(REDIS_CONNECTION_CONFIG),
});
and then I passed the once-initialized redisPubSub object to my createServer function.
It was this article that helped me see my error: https://docs.upstash.com/troubleshooting/max_concurrent_connections

Closing mongodb connections

I use the native mongodb driver with Express.js. I don't want to open and close connections repeatedly in my routes; I want to open one connection, use it in all my next() functions, and then close it when done.
I saw that after opening a connection I can pass the db object to the next() functions in the request object and use it there.
When I try to close the connection with var db = req.db; db.close(); it throws an error.
In order to solve this issue, I decided to use a setTimeout to close my connection.
I can pass around and use the db object, send the response, and then after 1-2 seconds the db object is closed by the setTimeout.
I'm worried that if there are many requests, the setTimeout calls will affect the server's performance if I use this trick to manage my db connections.
Sorry, I forgot to do:
db = client.db('test')
and:
client.close();
In later versions of the native mongodb Node.js driver, you can't directly open a db without getting the client first, and closing happens on the client.
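In other words, with the newer driver you connect once, obtain db objects from the client, and close the client (not the db) on shutdown. A minimal sketch, assuming a 3.x+ native driver:
const { MongoClient } = require('mongodb');

const client = new MongoClient('mongodb://localhost:27017');

async function main() {
  await client.connect();        // connect once at startup
  const db = client.db('test');  // dbs are obtained from the client
  // ... share db (or client) with your routes / next() functions ...
  await client.close();          // close the client when shutting down
}

main().catch(console.error);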

Connect only once to Mongo

UPDATE: While building an example, I found that the culprit seems to be restify-enroute. The package was messing with the mongoose connection. I published the example to https://github.com/HeavyStorm/mongoose-enroute-conflict.
I had understood that mongoose.connect was how to establish a connection between mongoose and Mongo, and supposed that this was a singleton connection of sorts: once called, every module in my app would be able to call Mongo.
This is probably wrong.
My current scenario:
I'm calling connect:
mongoose.connect("mongodb://localhost/test");
My app's structure:
/
/app.js
/modules
  ./users
    ./module.js
In app.js I call mongoose.connect. I also load and expose a route from module.js. When I receive an HTTP call, the module.js code kicks in (I can see that from the debugger), but as soon as I call Mongo (through Model.find() in this case), the code just skips: my callbacks are never called and the client is held in a waiting state.
However, if I add the mongoose.connect line to module.js, Model.find() yields to the callback almost instantly and the client receives the response.
TL;DR: Do I have to call mongoose.connect in every module that accesses the database? Why is that?

Mongoose calls hangs

I hadn't worked on my PC for a few days.
Suddenly all calls to Mongo via mongoose hang, and the callbacks are not called.
I checked that my call to .connect works and that the connection state is 1 (connected).
I also made sure the mongo service is running on localhost and the appropriate port 27017, and that I can use the mongo console and query the db manually.
I also scanned the internet for solutions, but all I found was 'check that you're actually connected', and I had verified that already.
Mongoose version 2.15.0, mongo version 2.4.9 and Node.js version 4.4.2.
I fixed it.
The problem was duplicate references to the mongoose module.
I had a mongoose reference locally (which was connected), but my schema was defined higher in the node_modules hierarchy, and it used another mongoose instance which had no connection.
Once I removed the duplicate mongoose module (npm uninstall mongoose on one of them), it worked.
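A quick way to spot this situation (assuming npm is your package manager) is to run npm ls mongoose, which prints every copy of the module in the dependency tree; more than one entry means two mongoose instances can be in play.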
The above solutions didn't work for me, so I fixed it with the following solution.
I had the same issue, where my db calls would hang with no invocation of my callbacks or promise resolution.
The problem was that I used createConnection() to establish the connection with the db, and it didn't work properly.
Using the connect() and connection imports instead worked.
Here is the sample code. Hope this helps.
import { connect, connection } from "mongoose";

const mongoUri = `mongodb://${my_mongo_host_&_port}`;

connect(mongoUri, {}); // to connect to my standalone db

// "connection" to listen to events
connection.on("connected", () => {
  console.log("MongoDB connection established!", mongoUri);
});
I am working with "mongoose": "^6.6.1".

How do I debug error ECONNRESET in Node.js?

I'm running an Express.js application using Socket.io for a chat webapp, and I get the following error randomly, around 5 times per 24h. The node process is wrapped in forever and it restarts itself immediately. The problem is that restarting Express kicks my users out of their rooms, and nobody wants that.
The web server is proxied by HAProxy. There are no socket stability issues, just using websockets and flashsockets transports. I cannot reproduce this on purpose.
This is the error with Node v0.10.11:
events.js:72
throw er; // Unhandled 'error' event
^
Error: read ECONNRESET // alternatively it's a 'write'
at errnoException (net.js:900:11)
at TCP.onread (net.js:555:19)
error: Forever detected script exited with code: 8
error: Forever restarting script for 2 time
EDIT (2013-07-22)
Added both a socket.io client error handler and the uncaughtException handler.
It seems that this one catches the error:
process.on('uncaughtException', function (err) {
  console.error(err.stack);
  console.log("Node NOT Exiting...");
});
So I suspect it's not a Socket.io issue but an HTTP request to another server that I make, or a MySQL/Redis connection. The problem is that the error stack doesn't help me identify my code issue. Here is the log output:
Error: read ECONNRESET
at errnoException (net.js:900:11)
at TCP.onread (net.js:555:19)
How do I know what causes this? How do I get more out of the error?
OK, not very verbose, but here's the stacktrace with Longjohn:
Exception caught: Error ECONNRESET
{ [Error: read ECONNRESET]
code: 'ECONNRESET',
errno: 'ECONNRESET',
syscall: 'read',
__cached_trace__:
[ { receiver: [Object],
fun: [Function: errnoException],
pos: 22930 },
{ receiver: [Object], fun: [Function: onread], pos: 14545 },
{},
{ receiver: [Object],
fun: [Function: fireErrorCallbacks],
pos: 11672 },
{ receiver: [Object], fun: [Function], pos: 12329 },
{ receiver: [Object], fun: [Function: onread], pos: 14536 } ],
__previous__:
{ [Error]
id: 1061835,
location: 'fireErrorCallbacks (net.js:439)',
__location__: 'process.nextTick',
__previous__: null,
__trace_count__: 1,
__cached_trace__: [ [Object], [Object], [Object] ] } }
Here is where I serve the flash socket policy file:
net = require("net")
net.createServer( (socket) =>
  socket.write("<?xml version=\"1.0\"?>\n")
  socket.write("<!DOCTYPE cross-domain-policy SYSTEM \"http://www.macromedia.com/xml/dtds/cross-domain-policy.dtd\">\n")
  socket.write("<cross-domain-policy>\n")
  socket.write("<allow-access-from domain=\"*\" to-ports=\"*\"/>\n")
  socket.write("</cross-domain-policy>\n")
  socket.end()
).listen(843)
Can this be the cause?
You might have guessed it already: it's a connection error.
"ECONNRESET" means the other side of the TCP conversation abruptly closed its end of the connection. This is most probably due to one or more application protocol errors. You could look at the API server logs to see if it complains about something.
But since you are also looking for a way to check the error and potentially debug the problem, you should take a look at "How to debug a socket hang up error in NodeJS?", which was posted on Stack Overflow in relation to a similar question.
Quick and dirty solution for development:
Use longjohn, you get long stack traces that will contain the async operations.
Clean and correct solution:
Technically, in Node, whenever you emit an 'error' event and no one listens to it, it will throw. To make it not throw, put a listener on it and handle it yourself. That way you can log the error with more information.
To have one listener for a group of calls, you can use domains, and also catch other errors at runtime. Make sure each async operation related to http(Server/Client) is in a different domain context from the other parts of the code; the domain will automatically listen for the error events and propagate them to its own handler. So you only listen to that handler and get the error data. You also get more information for free.
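For example, a minimal sketch of attaching such a listener to an outgoing HTTP request, so an ECONNRESET is logged instead of thrown:
const http = require('http');

const req = http.get('http://example.com/', (res) => res.resume());
req.on('error', (err) => {
  // without this listener, the 'error' event would be thrown
  console.error('request failed:', err.code, err.message);
});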
EDIT (2013-07-22)
As I wrote above:
"ECONNRESET" means the other side of the TCP conversation abruptly closed its end of the connection. This is most probably due to one or more application protocol errors. You could look at the API server logs to see if it complains about something.
What could also be the case: at random times, the other side is overloaded and simply kills the connection as a result. If that's the case, it depends on what exactly you're connecting to…
But one thing's for sure: you indeed have a read error on your TCP connection which causes the exception. You can see that by looking at the error code you posted in your edit, which confirms it.
A simple TCP server I had for serving the flash policy file was causing this. I can now catch the error using a handler:
# serving the flash policy file
net = require("net")
net.createServer((socket) =>
  # just added
  socket.on("error", (err) =>
    console.log("Caught flash policy server socket error: ")
    console.log(err.stack)
  )
  socket.write("<?xml version=\"1.0\"?>\n")
  socket.write("<!DOCTYPE cross-domain-policy SYSTEM \"http://www.macromedia.com/xml/dtds/cross-domain-policy.dtd\">\n")
  socket.write("<cross-domain-policy>\n")
  socket.write("<allow-access-from domain=\"*\" to-ports=\"*\"/>\n")
  socket.write("</cross-domain-policy>\n")
  socket.end()
).listen(843)
I had a similar problem where apps started erroring out after an upgrade of Node. I believe this can be traced back to this item in the Node v0.9.10 release notes:
net: don't suppress ECONNRESET (Ben Noordhuis)
Previous versions wouldn't error out on interruptions from the client. A break in the connection from the client now throws the ECONNRESET error in Node. I believe this is intended functionality for Node, so the fix (at least for me) was to handle the error, which I believe you did with your uncaughtException handler, although I handle it in the net.socket handler.
You can demonstrate this:
Make a simple socket server and get Node v0.9.9 and v0.9.10.
require('net')
  .createServer(function (socket) {
    // do nothing
  })
  .listen(21, function () {
    console.log('Socket ON')
  })
Start it up using v0.9.9 and then attempt to FTP to this server. I'm using FTP and port 21 only because I'm on Windows and have an FTP client, but no telnet client handy.
Then from the client side, just break the connection (I'm just doing Ctrl-C).
You should see NO ERROR when using Node v0.9.9, and an ERROR when using Node v0.9.10 and up.
In production I use v0.10.something and it still gives the error. Again, I think this is intended, and the solution is to handle the error in your code.
Had the same problem today.
After some research I found the very useful --abort-on-uncaught-exception Node.js option. Not only does it provide a much more verbose and useful error stack trace, it also saves a core file on application crash, allowing further debugging.
I also got the ECONNRESET error during my development; the way I solved it was by not using nodemon to start my server. Just using "node server.js" to start my server fixed my problem.
It's weird, but it worked for me, and I've never seen the ECONNRESET error since.
I was facing the same issue but I mitigated it by placing:
server.timeout = 0;
before server.listen. server is an HTTP server here. The default timeout is 2 minutes as per the API documentation.
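A minimal sketch of where that line goes, assuming a plain http server (the handler is just a placeholder):
const http = require('http');

const server = http.createServer((req, res) => res.end('ok'));
server.timeout = 0; // disable the default 2-minute inactivity timeout
server.listen(3000);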
Yes, your serving of the policy file can definitely cause the crash.
To reproduce it, just add a delay to your code:
net.createServer( function(socket)
{
  for (i = 0; i < 1000000000; i++) ;
  socket.write("<?xml version=\"1.0\"?>\n");
  …
… and use telnet to connect to the port. If you disconnect telnet before the delay has expired, you'll get a crash (uncaught exception) when socket.write throws an error.
To avoid the crash here, just add an error handler before reading/writing the socket:
net.createServer(function(socket)
{
  for (i = 0; i < 1000000000; i++) ;
  socket.on('error', function(error) { console.error("error", error); });
  socket.write("<?xml version=\"1.0\"?>\n");
}).listen(843)
When you try the above disconnect, you'll just get a log message instead of a crash.
And when you're done, remember to remove the delay.
Another possible case (though rare) could be if you have server-to-server communications and have set server.maxConnections to a very low value.
In Node's core lib net.js, it will call clientHandle.close(), which will also cause the ECONNRESET error:
if (self.maxConnections && self._connections >= self.maxConnections) {
  clientHandle.close(); // causes ECONNRESET on the other end
  return;
}
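You can reproduce this with a sketch using the core net module: with maxConnections set to 1, a second concurrent client has its handle closed and sees ECONNRESET on its end:
const net = require('net');

const server = net.createServer((socket) => {
  socket.write('hello\n'); // the first client is served normally
});
server.maxConnections = 1; // extra concurrent clients are closed by core
server.listen(8124);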
ECONNRESET occurs when the server side closes the TCP connection and your request to the server is not fulfilled. The server is effectively responding that the connection you are referring to is invalid.
Why would the server treat the connection as invalid?
Suppose you have enabled keep-alive connections between client and server, and the keep-alive timeout is configured to 15 seconds. This means that if the connection is idle for 15 seconds, a connection-close request will be sent. So after 15 seconds, the server tells the client to close the connection. BUT while the server is sending this request, the client may be sending a new request that is already in flight to the server. Since the connection is now invalid, the server rejects it with the ECONNRESET error. So the problem occurs when requests to the server are infrequent. To avoid it, disable keep-alive and it will work fine.
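With the core http module, one way to do that on the client side is a non-keep-alive agent (a sketch; the host and path are placeholders):
const http = require('http');

// an agent that closes the socket after each request instead of reusing it
const agent = new http.Agent({ keepAlive: false });

http.get({ host: 'example.com', path: '/', agent }, (res) => {
  res.resume(); // drain the response
});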
I had this error too and was able to solve it after days of debugging and analysis:
my solution
For me, VirtualBox (for Docker) was the problem. I had port forwarding configured on my VM, and the error only occurred on the forwarded port.
general conclusions
The following observations may save you the days of work I had to invest:
For me the problem only occurred on connections from localhost to localhost on one port -> check whether changing any of these constants solves the problem.
For me the problem only occurred on my machine -> let someone else try it.
For me the problem only occurred after a while and couldn't be reproduced reliably.
My problem couldn't be inspected with any of Node's or Express's (debug) tools -> don't waste time on this.
-> Figure out if something is messing with your network (settings), like VMs, firewalls etc.; that is probably the cause of the problem.
I solved the problem by simply connecting to a different network. That is one of the possible problems.
As discussed above, ECONNRESET means that the other side of the TCP conversation abruptly closed its end of the connection.
Your internet connection might be blocking you from connecting to some servers. In my case, I was trying to connect to mLab (a cloud database service that hosts MongoDB databases), and my ISP was blocking it.
I resolved this problem by:
Turning my wifi/ethernet connection off and on again.
Typing npm update in the terminal to update npm.
Logging out of the session and logging in again.
After that I tried the same npm command, and the good thing was it worked. I wasn't expecting it to be that simple.
I am using CentOS 7.
I just figured this out, at least in my use case.
I was getting ECONNRESET. It turned out that the way my client was set up, it was hitting the server with an API call a ton of times really quickly, when it only needed to hit the endpoint once.
When I fixed that, the error was gone.
I had the same issue, and it appears that the Node.js version was the problem.
I installed a previous version of Node.js (10.14.2) using nvm (which allows you to install several versions of Node.js and quickly switch from one version to another), and everything was OK.
It is not a "clean" solution, but it can serve you temporarily.
Try adding these options to socket.io:
const options = { transports: ['websocket'], pingTimeout: 3000, pingInterval: 5000 };
I hope this helps!
Node.js sockets use non-blocking IO. Consider using a non-blocking IO connection on the other end as well. For instance, if you use a blocking Java socket with Node, it will only work for a few seconds, after which the error will be served. Mitigate this by implementing a non-blocking connection, i.e. a SocketChannel with a Selector.
The first time I ran my app I got ECONNRESET, and after that I got an error like ECONNREFUSED. I faced both of these problems while running my Node app. In both cases I found they occurred because I had not started WampServer: I use a MySQL database in my app and get the data via WampServer. I resolved this by starting WampServer first and then running my Node app, and it worked fine. You can use node or nodemon for running the Node application; that wasn't the problem in my case.
A few options I tried that worked as temporary solutions:
If using Node, try switching between different Node versions using node use #version#. This worked for me.
Try switching your internet connection.
