Sails mongo reconnect - node.js

I am using sails 1.0.0-37 and sails-mongo 1.0.0-10.
When Sails is lifted while the MongoDB server is up and running, everything is fine. If MongoDB then goes down and the Node.js app tries to access it as part of some functionality, the request times out and an internal server error is shown to the user. That much is expected. However, when MongoDB comes back up, Sails no longer reconnects to it and throws this error:
"AdapterError: Unexpected error from database adapter: fn called its error exit with: { MongoError: Topology was destroyed }"
I set autoReconnect: true in the MongoDB adapter's options. This reconnection works only if the Node.js app does not try to access the MongoDB server while it is down. How can I fix this? Otherwise it is not possible to use Sails 1.0 and sails-mongo in production.

I faced the same problem; here is the explanation and solution:
If you don't set reconnectTries, it defaults to 30. After 30 failed attempts, Sails can no longer connect to Mongo and throws "Topology was destroyed".
For me, the solution was to set reconnectTries to Number.MAX_VALUE:
default: {
  adapter: 'sails-mongo',
  url: 'mongodb://admin:admin123@127.0.0.1:27017/datastore?authSource=admin',
  reconnectTries: Number.MAX_VALUE,
  reconnectInterval: 1000
}
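For reference, in Sails 1.x these options live in config/datastores.js (a sketch only; the credentials and host are the placeholder values from the answer, and option support can vary across sails-mongo versions):

```javascript
// config/datastores.js — sketch; credentials and host are placeholders.
module.exports.datastores = {
  default: {
    adapter: 'sails-mongo',
    url: 'mongodb://admin:admin123@127.0.0.1:27017/datastore?authSource=admin',
    // keep retrying forever instead of giving up after the default 30 tries
    reconnectTries: Number.MAX_VALUE,
    // wait 1s between reconnection attempts
    reconnectInterval: 1000
  }
};
```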
I hope that helps.

Related

postgresql and node with redis still making connection to db pool?

I'm a bit stuck here and was hoping to get some help.
My Node application has a separate module where I connect to Postgres and export the pool like so:
const { Pool, Client } = require('pg');

const pool = new Pool({
  user: process.env.POSTGRES_USER,
  host: process.env.POSTGRES_URL,
  database: process.env.POSTGRES_DATABASE,
  password: process.env.POSTGRES_PASSWORD,
  port: process.env.POSTGRES_PORT,
  keepAlive: 0,
  ssl: { rejectUnauthorized: false },
  connectionTimeoutMillis: 10000, // 10 seconds
  allowExitOnIdle: true,
  max: 10
});

pool.connect()
  .then(() => console.log('postgres connected'))
  .catch(err => console.error(err));

module.exports = pool;
On my route I have a redis cache as middleware. This works as expected: I can confirm responses are served by redis, and the logic in the route does not run when the request is cached. However, I was doing some load testing to see how everything would handle spikes, and noticed I started to get errors from Postgres:
Error: timeout exceeded when trying to connect
I also got errors mentioning max connections and the like.
I have tried increasing the max pool connections, but I still get this error when running larger load tests.
My question is: why would pg be trying to connect if the connection should be shared? Additionally, why is it even trying to connect if the request is cached?
Any help would be appreciated!
Apparently some of your stress-test requests are missing the redis cache. You haven't shown any code relevant to that, so there is not much more that can be said.
The error you show is not generated by PostgreSQL, it is generated by node's 'pg' module. You configured it to only allow 10 simultaneous connections. If more than that are requested, they have to line up and wait. And you also configured it to wait only for 10 seconds before bombing out with an error, and that is exactly what you are seeing.
You vaguely allude to other errors, but you would have to share the actual error message with us if you want help.
The system seems to be operating as designed. You did a stress test to see what would happen, and you have seen what happens.
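What the answer describes can be sketched as a toy model in plain Node (hypothetical, not pg's actual internals): up to `max` clients run at once, later acquirers queue, and a waiter that exceeds `connectionTimeoutMillis` rejects with an error like the one in the question.

```javascript
// Toy model of pg-style pool queueing (NOT node-postgres internals):
// `max` slots; acquirers beyond that wait in a queue; a waiter that
// exceeds `connectionTimeoutMillis` rejects with a timeout error.
class TinyPool {
  constructor({ max, connectionTimeoutMillis }) {
    this.max = max;
    this.timeout = connectionTimeoutMillis;
    this.inUse = 0;
    this.waiters = [];
  }

  acquire() {
    if (this.inUse < this.max) {
      this.inUse++;
      return Promise.resolve();
    }
    // All slots busy: queue up, but give up after the timeout.
    return new Promise((resolve, reject) => {
      const timer = setTimeout(() => {
        this.waiters = this.waiters.filter(w => w !== entry);
        reject(new Error('timeout exceeded when trying to connect'));
      }, this.timeout);
      const entry = { resolve, timer };
      this.waiters.push(entry);
    });
  }

  release() {
    const next = this.waiters.shift();
    if (next) {
      clearTimeout(next.timer);
      next.resolve(); // hand the freed slot to the next waiter
    } else {
      this.inUse--;
    }
  }
}
```

With max: 10 and 10-second waits, any burst of more than 10 concurrent queries that lasts longer than 10 seconds produces exactly the reported error.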

Mongoose connection to replica set not working

I am running my own MongoDB replica set on Kubernetes.
It has 3 members; I exposed them all via NodePort.
I can connect to it via shell:
(feel free to connect, it's an empty, isolated example that will be destroyed)
mongo mongodb://stackoverflow:practice@134.122.99.184:31064,134.122.99.184:31086,134.122.99.184:32754/thirty3?authSource=admin
However, I cannot connect to it via mongoose 5.11.12 using the same connection string.
It only works with mongoose versions up to 4.5.8:
mongoose.connect(
  "mongodb://stackoverflow:practice@134.122.99.184:31064,134.122.99.184:31086,134.122.99.184:32754/thirty3?authSource=admin&replicaSet=thirty3&retryWrites=true&w=majority",
  {
    useNewUrlParser: true,
    poolSize: 5,
    useUnifiedTopology: true,
    serverSelectionTimeoutMS: 5000 // Timeout after 5s instead of 30s
  }
)
I tried tons of configurations: gssapiServiceName=mongodb, replicaSetName=thirty3 (I checked the replica set name by running rs.conf()), and many more.
My question is - is there something wrong with mongoose handling these types of communications?
I have found similar issues that suggest downgrading as a solution, but downgrading is not ideal unless there is no way to fix it properly.
Please try the code samples above; the database is open for connections with the credentials shown.
This configuration works for me with a local MongoDB replica set:
await mongoose.connect("mongodb://localhost:27017/movieku", { family: 4 })
Source: https://docs.w3cub.com/mongoose/connections
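Replica-set URIs like the ones in this question are easy to get wrong around the ? and & separators. A small helper keeps the shape explicit (the function and its parameters are illustrative, not a mongoose API):

```javascript
// Hypothetical helper: assemble a replica-set URI of the shape
// mongodb://user:pass@host1,host2,.../db?replicaSet=...&authSource=...
// so the ?/& separators cannot be mixed up by hand.
function buildReplicaSetUri({ user, pass, hosts, db, replicaSet, authSource }) {
  const hostList = hosts.join(',');
  const query = `replicaSet=${replicaSet}&authSource=${authSource}`;
  return `mongodb://${user}:${pass}@${hostList}/${db}?${query}`;
}
```

The first option key after the path is introduced by a single ?, and every subsequent key by &; a stray "&?" in the query string can make the driver silently ignore the options that follow it.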

poolSize reaches the specified size, which causes the application to slow down drastically

I have a 3-member replica set running, and I am also using the cluster module to fork 3 other processes (the number of replica set members has nothing to do with the number of processes forked). In the mongoose connect method I have the following options set:
"use strict";

const mongoose = require("mongoose");
const config = require("../config.js");

// Set up mongoose connection
mongoose.connect(config.mongoURI, {
  useNewUrlParser: true,
  // silence deprecation warning
  useCreateIndex: true,
  // auto reconnect to db
  autoReconnect: true,
  // turn off buffering, and fail immediately if mongodb disconnects
  bufferMaxEntries: 0,
  bufferCommands: false,
  keepAlive: true,
  keepAliveInitialDelay: 450000,
  // number of socket connections to keep open
  poolSize: 1000
}, error => {
  if (error) {
    console.log(error);
  }
});

module.exports = mongoose.connection;
The above code is in a file named db.js. In my server.js, which starts the express application, I require db.js.
Whenever I reload the webpage multiple times, it gets to a point where the app slows down drastically (all this started happening when I decided to use a replica set). I connected to MongoDB through the mongo shell and ran db.serverStatus().connections. Every time I reloaded the page, the current field increased (which is expected whenever a new connection is made to MongoDB), but whenever the current field reaches the specified poolSize, the application takes a long time to load. I tried calling db.disconnect() whenever the end event is emitted on the express req object, which disconnects from MongoDB. This worked as expected, but since I am using change streams, closing opened connections this way throws MongoError: Topology was destroyed. The error being thrown is not the problem; the problem is preventing the app from slowing down drastically when the number of open connections hits the specified poolSize.
I also tried setting maxIdleTimeMS in the MongoDB connection string, but it is not working (maybe mongoose does not support it).
Note: whenever I run db.currentOp(), the active field is false for all connections.
I actually found the cause of this issue. I am heavily using change streams in the application, and the more change streams you create, the larger the poolSize you need. This has also been reported on the CORE SERVER board on MongoDB's Jira platform:
DOCS-11270
NODE-1305
Severe Performance Drop with Mongodb change streams
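Since each open change stream holds on to a pooled connection, one mitigation is to track the streams you open and close the ones you no longer need, instead of leaking one per request. A minimal sketch (the registry and its names are hypothetical, not a mongoose API; change streams expose a close() method):

```javascript
// Hypothetical registry: keep references to open change streams so they
// can be closed deliberately, freeing their pooled connections, instead
// of accumulating one stream (and one connection) per request.
class StreamRegistry {
  constructor() {
    this.streams = new Set();
  }

  // Register a stream (e.g. the result of Model.watch()) and return it.
  track(stream) {
    this.streams.add(stream);
    return stream;
  }

  // Close every tracked stream and forget it.
  async closeAll() {
    for (const stream of this.streams) {
      await stream.close();
    }
    this.streams.clear();
  }

  get size() {
    return this.streams.size;
  }
}
```

Calling closeAll() during shutdown (or when a watcher is no longer needed) keeps the number of live streams, and therefore pool usage, bounded.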

MongoDB error not master and slaveOk=false on primary node with mongoose

I'm running a replicated MongoDB, and I can connect without problems to the master DB in the set using mongo:
bash-4.2$ mongo --port 25023
MongoDB shell version: 3.2.6
rs0:PRIMARY>
But when using mongoose like this:
mongoose.connect('mongodb://127.0.0.1:25023/XXX', { useNewUrlParser: true });
I get:
XXX/node_modules/mongoose/lib/utils.js:452
throw err;
^
MongoError: not master and slaveOk=false
at queryCallback (/XXX/node_modules/mongodb-core/lib/cursor.js:247:25)
The suggestion was to do this:
mongodb://user:password@host:port,replicaSetHost:replicaSetPort/database?replicaSet=rs-someServer
But that is a bit unwieldy. Is there no way to tell mongoose we are connecting to a master server and not a slave?
Second problem: even listing the other hosts didn't work:
mongoose.connect('mongodb://localhost:P,S1:P,S2:P/XXX?replicaSet=rs0', { useNewUrlParser: true });
I get this error even though I supplied the replicaSet:
(node:13757) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 1): MongoError: seed list contains no mongos proxies, replicaset connections requires the parameter replicaSet to be supplied in the URI or options object, mongodb://server:port/db?replicaSet=name
Here is where it gets strange: I CAN make this work. If I use this line:
mongoose.connect('mongodb://127.0.0.1:25023/XXX', { useNewUrlParser: true });
And if I let it fail, then touch a JS file to force a Node.js restart, it seems to start working.
Same issue: basically, I wanted to connect to the primary node in a replica set (for write concern), but I was intermittently connecting to a secondary node (a race condition).
Solution / TLDR;
Add ?replicaSet=test&w=majority to your MONGO_URL connection string.
From the documentation...
Replica Set with a High Level of Write Concern
The following connects to a replica set with write concern configured to wait for replication to succeed across a majority of the data-bearing voting members, with a two-second timeout.
NOTE
For a replica set, specify the hostname(s) of the mongod instance(s) as listed in the replica set configuration.
mongodb://example1.com,example2.com,example3.com/?replicaSet=test&w=majority&wtimeoutMS=2000
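In mongoose, that documentation example translates to something like the following (a sketch only; the database name and options are illustrative, and useUnifiedTopology assumes a reasonably recent mongoose 5.x):

```javascript
// Sketch: put the replica-set name and majority write concern in the URI
// (the hostnames are the documentation's examples; 'mydb' is a placeholder).
const mongoose = require('mongoose');

mongoose.connect(
  'mongodb://example1.com,example2.com,example3.com/mydb' +
    '?replicaSet=test&w=majority&wtimeoutMS=2000',
  { useNewUrlParser: true, useUnifiedTopology: true }
);
```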

No primary server available when failover happens: MongoDB, Node.js, Mongoose

I am currently facing an issue when failover happens in mongodb replica set. The app fails to reconnect to the newly elected primary server and fails to perform all subsequent write operations.
Restarting app reconnects successfully.
The failover happens instantly and a new primary is elected. However, the app fails to connect to the new primary.
mongodb version: 3.2.6
mongoose version: 4.3.4
node.js version: 0.10.26
I was also facing a similar problem. Then I just changed
mongoose.connect(db)
to
mongoose.connect(db, { useNewUrlParser: true })
and now it is working fine.
I have a primary, a secondary, and an arbiter set up on three different nodes. This is how I connect using mongoose, and the failover works perfectly fine:
mongoose.connect('mongodb://user:pwd@a.com:27017,b.com:27017,c.com:27017/dbName');
So, everything except mongodb:// is a variable.
I had this problem, but it turned out that I was trying to access the database from a non-whitelisted IP.
mongoose.connect(url, { useNewUrlParser: true, useUnifiedTopology: true })
Use it like this and it will work fine.
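On older driver/mongoose versions that do not retry writes after an election, a common workaround is a generic retry wrapper around write operations. A sketch (the helper and its parameters are hypothetical, not part of mongoose):

```javascript
// Hypothetical retry helper: re-attempt an async operation (e.g. a write
// that failed during a primary election) a few times with a fixed delay,
// instead of surfacing the first transient error to the user.
async function withRetry(fn, { retries = 3, delayMs = 100 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= retries) throw err; // out of attempts: give up
      await new Promise(resolve => setTimeout(resolve, delayMs));
    }
  }
}
```

Usage would look like withRetry(() => MyModel.create(doc)); by the time the retry fires, the driver has usually discovered the new primary.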
