Failing to automatically reconnect to the new PRIMARY after a replica set failover, from Mongoose (MongoDB, Node.js driver)

I made a simple NodeJS app with Mongoose as the MongoDB driver, connected to a MongoDB replica set. The app works fine until I shut down the current PRIMARY. When the PRIMARY goes down, the replica set automatically elects a new PRIMARY, but after that the Node application no longer seems to respond to DB queries.
CODE: DB Connection
var options = {
    server: {
        socketOptions: {
            keepAlive: 1,
            connectTimeoutMS: 30000,
            socketTimeoutMS: 90000
        }
    },
    replset: {
        socketOptions: {
            keepAlive: 1,
            connectTimeoutMS: 30000,
            socketTimeoutMS: 90000
        },
        rs_name: 'rs0'
    }
};
var uri = "mongodb://xxx.xxx.xxx.xxx:27017,xxx.xxx.xxx.xxx:27017,xxx.xxx.xxx.xxx:27017/rstest";
mongoose.connect(uri, options);
CODE: DB Query
router.get('/test', function (req, res) {
    var testmodel = new testModel('test');
    testmodel.save(function (err, doc, numberAffected) {
        if (err) {
            console.log("ERROR: " + err);
            res.status(404).end();
        } else {
            console.log("Response sent");
            res.status(200).end();
        }
    });
});
Steps Followed
Created a MongoDB replica set across three VMs.
Created a simple NodeJS app (Express + Mongoose) with a test API as above.
Sent GET requests to '/test' continuously, at some interval, from a local system.
Took the PRIMARY instance down.
The console logs "ERROR: Error: connection closed".
APPLICATION STOPPED RESPONDING TO REQUESTS.
Versions:
"express": "4.10.6",
"mongodb": "1.4.23",
"mongoose": "3.8.21",
A sample app that I made for debugging this issue is available at https://melvingeorge@bitbucket.org/melvingeorge/nodejsmongorssample.git
I am not sure if this is a bug or some misconfiguration on my end. How can I solve this issue?

Write operations go only to the primary instance, and it takes some time for the replica set to elect a new primary.
from http://docs.mongodb.org/manual/faq/replica-sets/
How long does replica set failover take?
It varies, but a replica set will select a new primary within a
minute.
It may take 10-30 seconds for the members of a replica set to declare
a primary inaccessible. This triggers an election. During the
election, the cluster is unavailable for writes.
The election itself may take another 10-30 seconds.
Check your code with read operations (find/count): as long as there is no primary instance, you cannot do write operations.
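Since writes are expected to fail during the 10-60 second election window, one common workaround is to retry a failed write with a delay until a new primary is available. A minimal, generic sketch (not Mongoose-specific; the function name and defaults are illustrative):

```javascript
// Retry an async operation with a fixed delay between attempts, to
// ride out the window in which the replica set has no primary.
// `op` is any function returning a promise (e.g. a promisified save).
async function retryWrite(op, attempts = 10, delayMs = 5000) {
  let lastErr;
  for (let i = 0; i < attempts; i++) {
    try {
      return await op();
    } catch (err) {
      lastErr = err;
      // Wait before retrying; by then a new primary may be elected.
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw lastErr; // still failing after all attempts
}
```

In the route handler above, this would mean wrapping the callback-style `testmodel.save` in a promise and calling it through `retryWrite` instead of directly.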

The 'rs_name' in the replset options is necessary to specify a replica set. You can use mongoose.createConnection(uri, conf, callback) and inspect the final conf in the callback.

It looks like this got fixed in NODE-818 / 2.2.10.
But I am using 2.2.22 and still have a problem like that: upon reconnect, the mongo client reconnects to a secondary instead of the newly selected primary, and so I cannot write to the database.
My connection string is like mongodb://mongo1,mongo2,mongo3/db?replicaSet=rs
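When debugging which members the driver is actually given, it can help to pull the host list and options out of a multi-host URI like the one above. A tiny parser for that URI shape, purely for illustration (this is not the driver's real parser):

```javascript
// Split a multi-host mongodb:// URI of the shape
// mongodb://host1,host2,host3/db?key=value into its parts.
function parseMongoUri(uri) {
  const body = uri.replace(/^mongodb:\/\//, '');
  const [hostAndDb, query = ''] = body.split('?');
  const slash = hostAndDb.indexOf('/');
  const hostPart = slash === -1 ? hostAndDb : hostAndDb.slice(0, slash);
  const db = slash === -1 ? '' : hostAndDb.slice(slash + 1);
  const options = {};
  for (const pair of query.split('&').filter(Boolean)) {
    const [k, v] = pair.split('=');
    options[k] = v; // e.g. options.replicaSet === 'rs'
  }
  return { hosts: hostPart.split(','), db, options };
}
```

Logging the parsed host list at startup makes it easy to confirm all three members are present and the replicaSet name matches rs.status().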

Related

PouchDb in NodeJs: replication ceases after half an hour. Why?

I've developed a system with CouchDB 2.2.0 as the master database, PouchDB 7.0.0 in VueJS clients and a database monitor server using PouchDB under NodeJS 8.11.1.
I can change data in CouchDB using Fauxton, and the browser and mobile (PWA) clients update quickly even if left running for days. This is NOT true of the server running PouchDB in NodeJS: it will faithfully respond to the same changes unless there are no changes for 20 minutes or more; after that it simply and silently ignores any and all events in CouchDB.
I am setting about preparing a skeletal implementation with NodeJS and Pouch and as few other dependencies as possible and will update this question if I discover something; in the meantime I would like to ask...
Is there some well known reason why this might be happening?
How can I track down the cause without starting from scratch and gradually rebuilding the complete app brick by brick until it fails?
Update 18-10-03
I seem to have solved the problem by using an fs writeStream instead of console.log, without really understanding why that should make a difference.
My complete test app looks like this:
const fs = require('fs');
const PouchDB = require('pouchdb');
const adptrMemory = require('pouchdb-adapter-memory');

var stream = fs.createWriteStream("/tmp/pouchLog", { flags: 'a' });
const LG = (msg) => stream.write(`${msg}\n`);

const movesDB = process.env.LOCAL_DB;
LG(`Local :: ${movesDB}`);
LG(`Remote :: ${process.env.REMOTE_DB}`);

PouchDB.plugin(adptrMemory);
const movesDatabaseLocal = new PouchDB(movesDB);
const movesDatabaseRemote = new PouchDB(process.env.REMOTE_DB);
const repFromFilter = 'post_processing/by_new_inventory';

movesDatabaseLocal.replicate.from(movesDatabaseRemote, {
    live: true,
    retry: true,
    filter: repFromFilter,
})
    .on('change', (response) => {
        LG(`${movesDB} *** NEW EXCHANGE REQUEST DELTA ***`);
        LG(`Database replication from: ${response.docs.length} records.`);
    })
    .on('active', () => {
        LG(`${movesDB} *** NEW EXCHANGE REQUEST REPLICATION RESUMED ***`);
    })
    .on('paused', () => {
        LG(`${movesDB} *** NEW EXCHANGE REQUEST REPLICATION ON HOLD ***`);
    })
    .on('denied', (info) => {
        LG(`${movesDB} *** NEW EXCHANGE REQUEST REPLICATION DENIED *** ${info}`);
    })
    .on('error', err => LG(`Database error ${err}`));
Note that I still have not built back all the original functionality. I can say that the failure after an idle period does occur if the above code uses console.log, but goes away after switching to streamed logging.

MongoDB queries are taking 2-3 seconds from Node.js app on Heroku

I am having major performance problems with MongoDB. Simple find() queries are sometimes taking 2,000-3,000 ms to complete in a database with less than 100 documents.
I am seeing this both with a MongoDB Atlas M10 instance and with a cluster that I setup on Digital Ocean on VMs with 4GB of RAM. When I restart my Node.js app on Heroku, the queries perform well (less than 100 ms) for 10-15 minutes, but then they slow down.
Am I connecting to MongoDB incorrectly or querying incorrectly from Node.js? Please see my application code below. Or is this a lack of hardware resources in a shared VM environment?
Any help will be greatly appreciated. I've done all the troubleshooting I know how with Explain query and the Mongo shell.
var Koa = require('koa'); //v2.4.1
var Router = require('koa-router'); //v7.3.0
var MongoClient = require('mongodb').MongoClient; //v3.1.3

var app = new Koa();
var router = new Router();
app.use(router.routes());

//Connect to MongoDB
async function connect() {
    try {
        var client = await MongoClient.connect(process.env.MONGODB_URI, {
            readConcern: { level: 'local' }
        });
        var db = client.db(process.env.MONGODB_DATABASE);
        return db;
    }
    catch (error) {
        console.log(error);
    }
}

//Add MongoDB to Koa's ctx object
connect().then(db => {
    app.context.db = db;
});

//Get company's collection in MongoDB
router.get('/documents/:collection', async (ctx) => {
    try {
        var query = { company_id: ctx.state.session.company_id };
        var res = await ctx.db.collection(ctx.params.collection).find(query).toArray();
        ctx.body = { ok: true, docs: res };
    }
    catch (error) {
        ctx.status = 500;
        ctx.body = { ok: false };
    }
});

app.listen(process.env.PORT || 3000);
UPDATE
I am using MongoDB Change Streams and standard Server Sent Events to provide real-time updates to the application UI. I turned these off and now MongoDB appears to be performing well again.
Are MongoDB Change Streams known to impact read/write performance?
Change Streams do indeed affect the performance of your server, as noted in this SO question. As mentioned in the accepted answer there:
The default connection pool size in the Node.js client for MongoDB is 5. Since each change stream cursor opens a new connection, the connection pool needs to be at least as large as the number of cursors.
const mongoConnection = await MongoClient.connect(URL, {poolSize: 100});
(Thanks to MongoDB Inc. for investigating this issue.)
You need to increase your pool size to get back your normal performance.
I'd suggest you do more logging work. Slow queries after a restart might be worse than you think.
For a modern database/web app running on a normal machine, it is not easy to run into performance issues if you are doing things right. There might be a memory leak, other unreleased resources, or network congestion.
IMHO, you might want to determine whether it is a network problem first. By enabling the slow query log on MongoDB and logging in your code where each query begins and ends, you can achieve this.
If the network is totally fine and you see no MongoDB slow queries, then something is going wrong in your own application, and detailed logging should really help show where queries go slow.
Hope this helps.
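The begin/end logging suggested above can be centralized in a small helper so timing code is not scattered through the app. A sketch (the label format and default logger are just placeholders):

```javascript
// Wrap an async operation so its duration is logged, making slow
// queries visible. The log function is injectable for testing.
async function timed(label, op, log = console.log) {
  const start = Date.now();
  try {
    return await op();
  } finally {
    // Logged even if the operation throws.
    log(`${label} took ${Date.now() - start} ms`);
  }
}
```

In the Koa route above this could look like `await timed('find ' + ctx.params.collection, () => ctx.db.collection(ctx.params.collection).find(query).toArray())`, which would show whether the 2-3 second delay is in MongoDB or elsewhere.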

Riak connectivity from Node

This is probably not a bug but rather a gap in my understanding, but I am putting it here as I haven't been able to find a way so far. I'd appreciate your input.
I'm trying to connect to my Riak cluster (hosted on AWS) of 3 nodes via two options: 1) using an ejabberd server, and 2) using a Node server.
Connecting from the ejabberd server succeeds after I put the hostname and port in the ejabberd configuration, but when I use a simple Node server (code below), I get the error "Error: No RiakNodes available to execute command." Am I missing something here? I can confirm that the 3 nodes are indeed up with Riak running. Note that if I don't ping the nodes, the server doesn't throw any error, so it probably has to do with how pings are handled. The same server (without the ping) gives an ECONNREFUSED error if one of the nodes is brought down. So clearly the connection is going through, but not the ping.
Apologies if I am missing something basic here; even the firewall settings for the Riak nodes have been set to allow all inbound traffic, so it is not a case of the ejabberd server having access but not the Node server.
var async = require('async');
var assert = require('assert');
var logger = require('winston');
var Riak = require('basho-riak-client');

logger.remove(logger.transports.Console);
logger.add(logger.transports.Console, {
    level: 'debug',
    colorize: true,
    timestamp: true
});

var nodes = [
    'ip-xx-xx-xx-xx:8087',
    'ip-xx-xx-xx-xx:8087',
    'ip-xx-xx-xx-xx:8087'
];

var client = new Riak.Client(nodes, function (err, c) {
    logger.info('Now inside Riak.Client');
    // NB: at this point the client is fully initialized, and
    // 'client' and 'c' are the same object
});

client.ping(function (err, rslt) {
    logger.info('Now entered client.ping');
    if (err) {
        logger.info('There is an error encountered in client.ping');
        throw new Error(err);
    } else {
        // On success, ping returns true
        logger.info('client.ping has resulted in success!');
        assert(rslt === true);
    }
});

Replica Set not working as expected

I have configured as below, and my MongoDB doesn't need a username or password:
mongo: {
    module: 'sails-mongo',
    url: "mongodb://127.0.0.1:27017/mydb",
    replSet: {
        servers: [
            { host: "127.0.0.1", port: 27018 },
            { host: "127.0.0.1", port: 27019 }
        ],
        options: { connectWithNoPrimary: true, rs_name: "rs0" }
    }
}
It works fine, meaning I do not get a connection error and I am able to query. But when I brought down 127.0.0.1:27017, 127.0.0.1:27018 became PRIMARY, as shown by rs.status(). After this, I am no longer able to run any query and keep getting the following:
Error: no open connections
I am sure that I set up the replica set on my local machine correctly, as I used the MongoDB native driver to test the same scenario (bring down PRIMARY, SECONDARY takes over as PRIMARY) and there was no problem:
var url = 'mongodb://127.0.0.1:27017,127.0.0.1:27018,127.0.0.1:27019/mydb?w=0&wtimeoutMS=5000&replicaSet=sg1&readPreference=secondary';
mongodb.MongoClient.connect(url, function (err, result) {
    if (err || result === undefined || result === null) {
        throw err;
    } else {
        db = result;
    }
});
OK, I found the answer. This message was emitted because of session.js. I commented everything out in that file and now it works. The reason, I guess, is that session.js points to only a single host, the original PRIMARY; when you bring down that PRIMARY, session.js can no longer connect, so it throws an exception. I also tried putting the replica set's host IPs in the MongoDB URL string in session.js (mongodb://127.0.0.1:27017,127.0.0.1:27018,127.0.0.1:27019/mydb), but then "sails lift" failed; with only a single host it is fine.
Now, if I need to store session info, I need to start another MongoDB instance and point session.js to this new instance.
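For reference, the multi-host connection string used in the native-driver test above can be assembled from the same server list that appears in the replSet config. A small illustrative helper (not part of Sails or the driver; the function name is made up):

```javascript
// Build a multi-host mongodb:// URI from replica set members, so
// every member is listed and a failover does not strand the client
// on a single dead host.
function buildReplicaSetUri(members, db, rsName) {
  const hosts = members.map((m) => `${m.host}:${m.port}`).join(',');
  return `mongodb://${hosts}/${db}?replicaSet=${rsName}`;
}
```

For example, passing the three local members with db 'mydb' and set name 'rs0' yields mongodb://127.0.0.1:27017,127.0.0.1:27018,127.0.0.1:27019/mydb?replicaSet=rs0, the same shape as the URL the native-driver test uses.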

Error setting TTL index on collection : sessions (MongoDB/MongoHQ)

I'm able to connect to my primary DB no problem, but when I try to connect to my replica set, I get the TTL error. I've done my best to include all relevant code examples, but please ask if you need to see something that's not included. This is driving me bananas. The DB is hosted at MongoHQ.
So, the issue:
I can connect to my primary set (workingDB)
I cannot connect to my replica set (failingDB)
I cannot connect when trying to connect to both (mongoHQ).
Code example
mongoHQ   = "mongodb://<user>:<password>@candidate.14.mongolayer.com:10120/dbName,mongodb://<user>:<password>@candidate.15.mongolayer.com:10120"
failingDB = "mongodb://<user>:<password>@candidate.14.mongolayer.com:10120/dbName"
workingDB = "mongodb://<user>:<password>@candidate.15.mongolayer.com:10120/dbName"
# DB Options
opts =
  mongos: true
  server:
    auto_reconnect: true

# Connect to DB
mongoose.connect mongoHQ, opts

# express/mongo session storage
app.use express.session(
  secret: "Secrets are for children"
  cookie:
    maxAge: process.env.SESSION_TTL * 3600000
    httpOnly: false
  store: new mongoStore(
    url: mongoHQ
    collection: "sessions"
  , ->
    console.log "We're connected to the session store"
    return
  )
)
# Error: Error setting TTL index on collection : sessions
# * Connecting to "workingDB" works as expected.
# * Connecting to "failingDB" throws the same TTL Error
# * candidate.14 is the primary set, candidate.15 is the replica set
Perhaps a bit late, but I was getting similar errors today with MongoHQ and Mongoose. I solved it by removing the option mongos: true (at least for the moment, fingers crossed). I think that option is not really needed for replica sets (only when using mongos servers):
http://mongoosejs.com/docs/connections.html#replicaset_connections
http://support.mongohq.com/languages/mongoose.html
Also, it's better to wait for the connection being established before trying to set up MongoStore, for example:
mongoose.connect(mongoHQ);
var db = mongoose.connection;

db.on('error', function () {
    // Panic
});

db.on('connected', function () {
    var app = express();
    app.use(express.session(sessionOptions));
    // etc...
    app.listen(config.port, function () {
        console.log('Listening on port ', config.port);
    });
});
I was able to resolve the issue by simply removing the reference to the replica set in the URI:
mongoHQ = "mongodb://<user>:<password>@candidate.15.mongolayer.com:10120/dbName";
