Rebuilding connections in Node.js with pg-promise

In the scenario where master/replica Postgres connections are built using pg-promise, is there a way to rebuild these connections in case of replica outages?
Instead of setting process.exitCode = 1; in the error function passed with the initOptions and rebuilding only the working connections on service start-up, is there a better way to remove the failing connection (and, even better, remove it if it's a replica but set process.exitCode if it's a primary)?
const initOptions = {
  // global event notification;
  error: (error, e) => {
    if (e.cn) {
      // log
    }
    process.exitCode = 1;
  }
};

// singleton
const pgp = require('pg-promise')(initOptions);

// then for each database in config, we connect and start the service

The pg-promise module is built on top of node-postgres, which uses a connection pool capable of restoring broken connections automatically.
Nothing is needed on your side to that end. Once the connection is available again, your queries will start succeeding again.
And according to the logic you seek, you can call process.exit() specifically when the primary connection is lost, while ignoring (or only logging) the loss of the replica connection.
Provided that the two are used through separate Database objects, you can differentiate them with the help of the dc parameter (Database Context), which you pass in during the object's construction and which can be anything.
Example:
const dbPrimary = pgp(primaryConnection, 'primary');
const dbReplica = pgp(replicaConnection, 'replica');
Then inside the global error handler:
const initOptions = {
  error: (err, e) => {
    if (e.cn) {
      // connectivity issue:
      console.log(e.dc, 'connection error:', err);
      if (e.dc === 'primary') {
        process.exit(1);
      }
    }
  }
};
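For completeness, a minimal sketch of how the two snippets fit together; the connection strings below are placeholders, not values from the question:

const initOptions = {
  error: (err, e) => {
    if (e.cn) {
      // connectivity issue
      console.log(e.dc, 'connection error:', err);
      if (e.dc === 'primary') {
        process.exit(1); // losing the primary is fatal
      }
      // a lost replica is only logged; the pool reconnects once it is reachable again
    }
  }
};

const pgp = require('pg-promise')(initOptions);

// placeholder connection strings
const dbPrimary = pgp('postgres://user:pass@primary-host:5432/app', 'primary');
const dbReplica = pgp('postgres://user:pass@replica-host:5432/app', 'replica');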

Related

"connection terminated unexpectedly" error with Node, Postgres on AWS Lambda

I have a number of Node functions running on AWS Lambda. These functions have been using the Node 8 runtime, but AWS sent out an end-of-life notice saying that functions should be upgraded to the latest LTS. With that, I upgraded one of my functions to use Node 12. After being in production for a bit, I'm starting to see a ton of connection terminated unexpectedly errors when querying the database.
Here are the errors that I'm seeing:
The connection terminated unexpectedly error
And Error [ERR_STREAM_DESTROYED]: Cannot call write after a stream was destroyed - this seems to happen on the first or second invocation after seeing the connection terminated unexpectedly error.
I'm using Knex.js for querying the database. I was running older versions of knex and node-postgres and recently upgraded to see if it would resolve the issue, but no luck. Here are the versions of knex and node-postgres that I'm currently running:
"knex": "^0.20.8"
"pg": "^7.17.1"
The only change I've made to this particular function is the upgrade to Node 12. I've also tried Node 10, but the same issue persists. Unfortunately, AWS won't let me downgrade to Node 8 to verify that it is indeed an issue. None of my other functions running on Node 8 are experiencing this issue.
I've researched knex, node-postgres and tarn.js (the Knex connection pooling library) to see if any related issues or solutions popped up, but so far, I haven't had any luck.
UPDATE:
Example of a handler. Note that this is happening on many different Lambdas, all running Node 12.
require('../../helpers/knex')

const { Rollbar } = require('@scoutforpets/utils')
const { Email } = require('@scoutforpets/notifications')
const { transaction: tx } = require('objection')

const Invoice = require('../../models/invoice')

// configure rollbar for error logging
const rollbar = Rollbar.configureRollbar(process.env.ROLLBAR_TOKEN)

/**
 *
 * @param {*} event
 */
async function handler (event) {
  const { invoice } = event
  const { id: invoiceId } = invoice

  try {
    return tx(Invoice, async Invoice => {
      // send the receipt
      await Email.Customer.paymentReceipt(invoiceId, true)

      // convert JSON to model
      const i = Invoice.fromJson(invoice)

      // mark the invoice as having been sent
      await i.markAsSent()
    })
  } catch (err) {
    return err
  }
}

module.exports.handler = rollbar.lambdaHandler(handler)
Starting with Node.js 10, AWS Lambda makes the handler async, so you have to adapt your code.
Docs: https://docs.aws.amazon.com/lambda/latest/dg/nodejs-prog-model-handler.html
The runtime passes three arguments to the handler method. The first
argument is the event object, which contains information from the
invoker. The invoker passes this information as a JSON-formatted
string when it calls Invoke, and the runtime converts it to an object.
When an AWS service invokes your function, the event structure varies
by service.
The second argument is the context object, which contains information
about the invocation, function, and execution environment. In the
preceding example, the function gets the name of the log stream from
the context object and returns it to the invoker.
The third argument, callback, is a function that you can call in
non-async functions to send a response. The callback function takes
two arguments: an Error and a response. When you call it, Lambda waits
for the event loop to be empty and then returns the response or error
to the invoker. The response object must be compatible with
JSON.stringify.
For async functions, you return a response, error, or promise to the
runtime instead of using callback.
exports.handler = async function(event, context, callback) {
  console.log("EVENT: \n" + JSON.stringify(event, null, 2))
  return context.logStreamName
}
Thx!
I think you need to set the right connection pooling config.
See the docs here: https://github.com/marcogrcr/sequelize/blob/patch-1/docs/manual/other-topics/aws-lambda.md
const { Sequelize } = require("sequelize");

let sequelize = null;

async function loadSequelize() {
  const sequelize = new Sequelize(/* (...) */, {
    // (...)
    pool: {
      /*
       * Lambda functions process one request at a time but your code may issue multiple queries
       * concurrently. Be wary that `sequelize` has methods that issue 2 queries concurrently
       * (e.g. `Model.findAndCountAll()`). Using a value higher than 1 allows concurrent queries to
       * be executed in parallel rather than serialized. Careful with executing too many queries in
       * parallel per Lambda function execution since that can bring down your database with an
       * excessive number of connections.
       *
       * Ideally you want to choose a `max` number where this holds true:
       * max * EXPECTED_MAX_CONCURRENT_LAMBDA_INVOCATIONS < MAX_ALLOWED_DATABASE_CONNECTIONS * 0.8
       */
      max: 2,
      /*
       * Set this value to 0 so connection pool eviction logic eventually cleans up all connections
       * in the event of a Lambda function timeout.
       */
      min: 0,
      /*
       * Set this value to 0 so connections are eligible for cleanup immediately after they're
       * returned to the pool.
       */
      idle: 0,
      // Choose a small enough value that fails fast if a connection takes too long to be established.
      acquire: 3000,
      /*
       * Ensures the connection pool attempts to be cleaned up automatically on the next Lambda
       * function invocation, if the previous invocation timed out.
       */
      evict: CURRENT_LAMBDA_FUNCTION_TIMEOUT
    }
  });

  // or `sequelize.sync()`
  await sequelize.authenticate();

  return sequelize;
}

module.exports.handler = async function (event, callback) {
  // re-use the sequelize instance across invocations to improve performance
  if (!sequelize) {
    sequelize = await loadSequelize();
  } else {
    // restart connection pool to ensure connections are not re-used across invocations
    sequelize.connectionManager.initPools();

    // restore `getConnection()` if it has been overwritten by `close()`
    if (sequelize.connectionManager.hasOwnProperty("getConnection")) {
      delete sequelize.connectionManager.getConnection;
    }
  }

  try {
    return await doSomethingWithSequelize(sequelize);
  } finally {
    // close any opened connections during the invocation
    // this will wait for any in-progress queries to finish before closing the connections
    await sequelize.connectionManager.close();
  }
};
It's actually for sequelize, not knex, but I'm sure under the hood they work the same way.
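If you prefer to stay on knex, a rough equivalent sketch (untested; the pool keys below are tarn.js options that knex passes through, and the connection value is a placeholder):

const knex = require('knex')({
  client: 'pg',
  connection: process.env.DATABASE_URL, // placeholder
  pool: {
    min: 0,                      // let the pool drain completely between invocations
    max: 2,                      // keep connections per container small
    idleTimeoutMillis: 1000,     // release idle connections quickly
    acquireTimeoutMillis: 3000,  // fail fast if a connection cannot be acquired
    reapIntervalMillis: 1000     // how often idle connections are checked for eviction
  }
});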
I had this problem too; in my case it was because I tried to connect to the DB in production.
So I added ssl to the Pool, like this:
const pool = new Pool({
  connectionString: connectionString,
  ssl: { rejectUnauthorized: false },
});
Hope it helps you too...

How to handle failed mongodb connect attempt (and what is preventing my node.js from terminating)?

The below code attempts to connect to a MongoDB instance.
When connected successfully it prints, closes the connection and thus terminates as expected.
However, when the connection is not successful (e.g. the MongoDB instance is not running) the below code fails to terminate even though the client variable is undefined.
What can I do to allow it to terminate in the case of a failed connection? Repro below.
Per this post (my nodejs script is not exiting on its own after successful execution), I tried running process._getActiveRequests() and process._getActiveHandles() to see what was active and thus preventing Node.js from exiting. Those show that there are indeed active requests/handles, but I am not sure how to close them.
const MongoClient = require('mongodb').MongoClient;

async function main() {
  let client;
  try {
    client = await MongoClient.connect('mongodb://localhost:27017');
    console.log('Connected successfully!');
  } catch (error) {
    console.log(`Failed to connect:${error}`);
  } finally {
    if (client)
      client.close();
  }
}

main();

AWS Lambda Container destroy event

When should connections be released and resources cleaned up in Lambda? In a normal Node.js application we would use the hook
process.on('exit', (code) => {
  console.log(`About to exit with code: ${code}`);
});
However, this doesn't work on AWS Lambda, leaving the MySQL connection in sleep mode. We don't have enough resources for that many active connections. None of the AWS documentation specifies a way to achieve this.
How can I receive the stop event of an AWS Lambda container?
The short answer is that there is no such event to know when the container is stopped.
UPDATE: I've not used this library, but https://www.npmjs.com/package/serverless-mysql appears to try to solve just this problem.
PREVIOUS LONG ANSWER:
The medium answer: after speaking with someone at AWS about this I now believe you should scope database connections at the module level so they are reused as long as the container exists. When your container is destroyed the connection will be destroyed at that point.
Original answer:
This question touches on some rather complicated issues to consider with AWS Lambda functions, mainly because of the possibility of considering connection pooling or long-lived connections to your database. To start with, Lambda functions in Node.js execute as a single, exported Node.js function with this signature (as you probably know):
exports.handler = (event, context, callback) => {
  // TODO implement
  callback(null, 'Hello from Lambda');
};
The cleanest and simplest way to handle database connections is to create and destroy them with every single function call. In this scenario, a database connection would be created at the beginning of your function and destroyed before your final callback is called. Something like this:
const mysql = require('mysql');

exports.handler = (event, context, callback) => {
  let connection = mysql.createConnection({
    host     : 'localhost',
    user     : 'me',
    password : 'secret',
    database : 'my_db'
  });

  connection.connect();

  connection.query('SELECT 1 + 1 AS solution', (error, results, fields) => {
    if (error) {
      connection.end();
      callback(error);
    } else {
      let retval = results[0].solution;
      connection.end();
      console.log('The solution is: ', retval);
      callback(null, retval);
    }
  });
};
NOTE: I haven't tested that code. I'm just providing an example for discussion.
I've also seen conversations like this one discussing the possibility of placing your connection outside the main function body:
const mysql = require('mysql');

let connection = mysql.createConnection({
  host     : 'localhost',
  user     : 'me',
  password : 'secret',
  database : 'my_db'
});

connection.connect();

exports.handler = (event, context, callback) => {
  // NOTE: should check if the connection is open first here
  connection.query('SELECT 1 + 1 AS solution', (error, results, fields) => {
    if (error) {
      callback(error);
    } else {
      let retval = results[0].solution;
      console.log('The solution is: ', retval);
      callback(null, retval);
    }
  });
};
The theory here is this: because AWS Lambda will try to reuse an existing container after the first call to your function, the next function call will already have a database connection open. The example above should probably check the existence of an open connection before using it, but you get the idea.
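As a rough, untested sketch of what that check could look like, this leans on the mysql driver's connection.ping() and simply recreates the connection when the ping fails; the credentials are the same placeholders as above:

function getConnection(callback) {
  // ping the existing connection; if it fails, create a fresh one
  connection.ping(err => {
    if (!err) {
      return callback(null, connection);
    }
    connection = mysql.createConnection({
      host     : 'localhost',
      user     : 'me',
      password : 'secret',
      database : 'my_db'
    });
    connection.connect(connectErr => callback(connectErr, connection));
  });
}

exports.handler = (event, context, callback) => {
  getConnection((err, conn) => {
    if (err) return callback(err);
    conn.query('SELECT 1 + 1 AS solution', (error, results) => {
      if (error) return callback(error);
      callback(null, results[0].solution);
    });
  });
};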
The problem of course is that this leaves your connection open indefinitely. I'm not a fan of this approach, but depending on your specific circumstances it might work. You could also introduce a connection pool into that scenario. But regardless, you have no event in this case to cleanly destroy a connection or pool. The container process hosting your function would itself be killed. So you'd have to rely on your database killing the connection from its end at some point.
I could be wrong about some of those details, but I believe at a high level that is what you're looking at. Hope that helps!

MongoClient on error event

I am using Node/Mongo and want to capture all MongoErrors so that I can transform them into another error format. I am looking to do this at the base level but cannot figure it out.
Here's my connection setup:
let connection = false

// Create a Mongo connection
export function getConnection () {
  if (!connection) {
    connection = MongoClient.connect(config.db.mongo.uri, {
      promiseLibrary: Promise // Bluebird
    })
  }
  return connection
}

// Fetch a connection for a collection
export function getCollection (collection) {
  return getConnection().then(c =>
    c.collection(collection)
  )
}
I've tried adding on('error') to both connection and MongoClient as well as MongoClient.Db, but they do not have that method. I've additionally added a catch block to my getCollection, but the errors do not seem to hit it (I am testing with the MongoError 11000 for duplicate fields).
It seems that it can be done, but I haven't figured it out. It may be because I am using promises.
This snippet of code does it for us. I suspect you need to pass in the function that handles the 'error' event as well. Hope it helps someone who stumbles across this question in the future.
MongoClient.connect(database_string, (err, db) => {
  db.on('error', function() {
    // '.bold.red' relies on the colors package being loaded
    console.error("Lost connection to mongodb(error), exiting".bold.red);
    return process.exit(1);
  }); // deal breaker for now
});
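Since the question connects through a promise, the same listener can be attached once the promise resolves. A rough sketch, assuming the 2.x driver used in the question (where MongoClient.connect resolves to a Db that emits 'error' and 'close' events):

getConnection().then(db => {
  db.on('error', err => {
    // transform or log the MongoError here
    console.error('MongoDB connection error:', err)
  })
  db.on('close', () => {
    console.error('MongoDB connection closed')
  })
})

Note that operation-level errors such as the duplicate-key error 11000 are still delivered through the rejected promise of the operation itself (e.g. insertOne), not through these connection events.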

Keeping open a MongoDB database connection

In so many introductory examples of using MongoDB, you see code like this:
var MongoClient = require('mongodb').MongoClient;

MongoClient.connect("mongodb://localhost:port/adatabase", function(err, db) {
  /* Some operation... CRUD, etc. */
  db.close();
});
If MongoDB is like any other database system, open and close operations are typically expensive time-wise.
So, my question is this: Is it OK to simply do the MongoClient.connect("... once, assign the returned db value to some module global, have various functions in the module do various database-related work (insert documents into collections, update documents, etc. etc.) when they're called by other parts of the application (and thereby re-use that db value), and then, when the application is done, only then do the close.
In other words, open and close are done once - not every time you need to go and do some database-related operation. And you keep re-using that db object that was returned during the initial open\connect, only to dispose of it at the end, with the close, when you're actually done with all your database-related work.
Obviously, since all the I/O is async, before the close you'd make sure that the last database operation completed before issuing the close. Seems like this should be OK, but I wanted to double-check just in case I'm missing something, as I'm new to MongoDB. Thanks!
Yes, that is fine and typical behavior. Start your app, connect to the db, do operations against the db for a long time, maybe re-connect if the connection ever dies unexpectedly, and then just never close the connection (just rely on the automatic close that happens when your process dies).
mongodb version ^3.1.8
Initialize the connection as a promise:
const MongoClient = require('mongodb').MongoClient
const uri = 'mongodb://...'
const client = new MongoClient(uri)
const connection = client.connect() // initialized connection
And then call the connection whenever you wish to perform an action on the database:
// if I want to insert into the database...
const connect = connection

connect.then(() => {
  const doc = { id: 3 }
  const db = client.db('database_name')
  const coll = db.collection('collection_name')

  coll.insertOne(doc, (err, result) => {
    if (err) throw err
  })
})
The current accepted answer is correct in that you may keep the same database connection open to perform operations; however, it is missing details on how you can retry connecting if the connection closes. Below are two ways to automatically reconnect. The code is in TypeScript, but it can easily be translated into plain Node.js if you need to.
Method 1: MongoClient Options
The simplest way to allow MongoDB to reconnect is to define reconnectTries in an options object passed into MongoClient. Any time a CRUD operation times out, it will use the parameters passed into MongoClient to decide how to retry (reconnect). Setting the option to Number.MAX_VALUE essentially makes it retry forever until it's able to complete the operation. You can check out the driver source code if you want to see which errors will be retried.
class MongoDB {
  private db: Db;

  constructor() {
    this.connectToMongoDB();
  }

  async connectToMongoDB() {
    const options: MongoClientOptions = {
      reconnectInterval: 1000,
      reconnectTries: Number.MAX_VALUE
    };

    try {
      const client = new MongoClient('uri-goes-here', options);
      await client.connect();
      this.db = client.db('dbname');
    } catch (err) {
      console.error(err, 'MongoDB connection failed.');
    }
  }

  async insert(doc: any) {
    if (this.db) {
      try {
        await this.db.collection('collection').insertOne(doc);
      } catch (err) {
        console.error(err, 'Something went wrong.');
      }
    }
  }
}
Method 2: Try-catch Retry
If you want more granular support on trying to reconnect, you can use a try-catch with a while loop. For example, you may want to log an error when it has to reconnect or you want to do different things based on the type of error. This will also allow you to retry depending on more conditions than just the standard ones included with the driver. The insert method can be changed to the following:
async insert(doc: any) {
  if (this.db) {
    let isInserted = false;

    while (isInserted === false) {
      try {
        await this.db.collection('collection').insertOne(doc);
        isInserted = true;
      } catch (err) {
        // Add custom error handling if desired
        console.error(err, 'Attempting to retry insert.');

        try {
          await this.connectToMongoDB();
        } catch {
          // Do something if this fails as well
        }
      }
    }
  }
}
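A possible usage sketch (the class and method names come from the answer above; calling it from an async context is assumed):

const mongo = new MongoDB();

// later, inside an async request handler:
await mongo.insert({ id: 3, name: 'example' });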
