I am running my own MongoDB Replica Set on Kubernetes.
It has 3 members, all of which I exposed via NodePort.
I can connect to it via shell:
(feel free to connect, it's an empty, isolated example that will be destroyed)
mongo mongodb://stackoverflow:practice@134.122.99.184:31064,134.122.99.184:31086,134.122.99.184:32754/thirty3?authSource=admin
However, I cannot connect to it via mongoose 5.11.12 using the same connection string.
It only works with mongoose versions up to 4.5.8.
mongoose.connect("mongodb://stackoverflow:practice#134.122.99.184:31064,134.122.99.184:31086,134.122.99.184:32754/thirty3?authSource=admin&replicaSet=thirty3&?retryWrites=true&w=majority",
{
useNewUrlParser: true,
poolSize: 5,
useUnifiedTopology: true,
serverSelectionTimeoutMS: 5000, // Timeout after 5s instead of 30s
})
I tried tons of configurations: gssapiServiceName=mongodb, replicaSetName=thirty3 (I checked the replica set name by running rs.conf()), and many others.
My question is - is there something wrong with mongoose handling these types of communications?
I have found similar issues that suggest downgrading as a solution, but downgrading is not ideal unless there is no proper fix.
Please try the code samples above, the database is open for connections with the credentials exposed.
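For reference, the same connection can also be written with the replica-set and auth parameters in the options object rather than the query string. This is only a minimal sketch, assuming mongoose 5.x with the unified topology; the hosts and credentials are the ones above:
const mongoose = require("mongoose");

mongoose.connect(
  "mongodb://stackoverflow:practice@134.122.99.184:31064,134.122.99.184:31086,134.122.99.184:32754/thirty3",
  {
    useNewUrlParser: true,
    useUnifiedTopology: true,
    replicaSet: "thirty3",          // must match the name reported by rs.conf()
    authSource: "admin",            // credentials live in the admin database
    serverSelectionTimeoutMS: 5000, // fail fast instead of waiting 30s
  }
).catch(err => console.error("initial connection failed:", err));
If this still fails, the rejected promise usually carries details on which of the three hosts the driver could not reach.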
This configuration works for me with a local MongoDB replica set.
await mongoose.connect("mongodb://localhost:27017/movieku", { family: 4 })
Source: https://docs.w3cub.com/mongoose/connections
With mongoose one can simply handle reconnections via connection options:
let dbOptions = {
dbName: process.env.MONGO_DATABASE_NAME,
autoReconnect: true,
reconnectInterval: 1000,
reconnectTries: 10,
};
mongoose.connect(process.env.MONGO_URI, dbOptions);
// Create connection object.
const db = mongoose.connection;
In the native MongoDB driver (version 4.4) there are no similar connection options available:
https://mongodb.github.io/node-mongodb-native/4.4/interfaces/MongoClientOptions.html
What is the easiest way to handle database reconnections in case of a major error?
Wherever you've got the mongoose connection snippet from, it's outdated.
The autoReconnect and auto_reconnect options were a thing in the Node.js native driver before v4.0, and mongoose just proxied these options to the driver.
This is the documentation for the driver 3.7 with "autoReconnect" still there: http://mongodb.github.io/node-mongodb-native/3.7/api/global.html#MongoClientOptions and this is the commit where it's been removed: https://github.com/mongodb/node-mongodb-native/commit/e3cd9e684aea99be0430d856d6299e65258bb4c3#diff-f005e84d9066ef889099ec2bd907abf7900f76da67603e4130e1c92fac92533dL90
The option was true by default, with a strong note not to change this value unless you know exactly why you need to disable it.
v4 introduced many changes to the driver - a refactoring to TypeScript and architectural changes, as you can see from the commit. One of those changes affected connection logic and pool management. There is no option to disable reconnection anymore: the driver always reconnects, regardless of whether you connect directly or via mongoose.
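In practice that means that with driver 4.x (and mongoose 6+) you handle outages by observing the connection state and bounding how long operations wait, rather than by tuning reconnect options. A minimal sketch, assuming mongoose 6 on top of driver 4 and reusing the MONGO_URI / MONGO_DATABASE_NAME env vars from the question:
const mongoose = require("mongoose");

async function connectDb() {
  // The driver reconnects on its own; serverSelectionTimeoutMS only bounds
  // how long an individual operation waits for a usable server.
  await mongoose.connect(process.env.MONGO_URI, {
    dbName: process.env.MONGO_DATABASE_NAME,
    serverSelectionTimeoutMS: 5000,
  });

  // Observe state changes instead of configuring reconnect attempts.
  mongoose.connection.on("disconnected", () => console.warn("MongoDB disconnected"));
  mongoose.connection.on("reconnected", () => console.info("MongoDB reconnected"));
  mongoose.connection.on("error", err => console.error("MongoDB error", err));

  return mongoose.connection;
}

module.exports = connectDb;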
I am having issues connecting to MongoDB running remotely, and the connection error I am getting from the server is somewhat weird.
My network access whitelist is set to allow all (0.0.0.0/0). Hence, my local robo3t installation was able to connect. However, I could not connect from my Node.js code. The error is: "MongooseServerSelectionError: Could not connect to any servers in your MongoDB Atlas cluster. One common reason is that you're trying to access the database from an IP that isn't whitelisted. Make sure your current IP address is on your Atlas cluster's IP whitelist"
An IP whitelist problem seems unlikely, given that my local robo3t client is able to connect to the same remote Mongo Atlas instance and the whitelist is allow-all.
How do I debug this kind of thing, please?
UPDATE: this is how I connect to MongoDB. It works well locally, too.
try {
const connectionString =
process.env.APP_ENV == "test"
? await getInMemoryMongoDbAdapter()
: `mongodb://${process.env.MONGODB_HOSTNAME}:${process.env.MONGODB_PORT}/${process.env.CBT_DATABASE_NAME}`;
logger.info(`Connecting to MongoDB service: ${connectionString}`);
await mongoose.connect(connectionString, {
useNewUrlParser: true,
useUnifiedTopology: true,
});
} catch (error) {
reject(error);
}
The logger line correctly shows: Connecting to MongoDB service: mongodb://<user>:<password>@cluster0-xxx.yyy.zzz.net:<port>/<database>
UPDATE 2:
My localhost also does not connect via this node app, whereas my robo3t (local MongoDB client) does. I guess that means Heroku-specific issues can now be comfortably ruled out.
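One way to get more detail on this kind of MongooseServerSelectionError is to log the error's reason field, which (with the unified topology in mongoose 5.x) carries the driver's view of each server it tried, including the per-server error (TLS handshake, auth, DNS, and so on). A small sketch, reusing the connectionString and reject from the snippet above:
try {
  await mongoose.connect(connectionString, {
    useNewUrlParser: true,
    useUnifiedTopology: true,
  });
} catch (error) {
  // error.reason is a TopologyDescription; error.reason.servers maps each
  // host to the last error the driver saw for it.
  console.error(error.reason);
  reject(error);
}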
A decade later, I found that for the connection parameters, I needed to supply the authSource and ssl options, as below:
{
useNewUrlParser: true,
useUnifiedTopology: true,
authSource: "admin",
ssl: true,
}
Neither one works without the other. Big shout-out to @darklightcode for all the insights he gave, leading me to dig deeper. Thanks man!
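For completeness, a minimal sketch of the full call with both options in place, inside the same async wrapper as before. The host/port/database env vars mirror the ones in the question; MONGODB_USER and MONGODB_PASSWORD are hypothetical placeholders:
const uri =
  `mongodb://${process.env.MONGODB_USER}:${process.env.MONGODB_PASSWORD}` +
  `@${process.env.MONGODB_HOSTNAME}:${process.env.MONGODB_PORT}/${process.env.CBT_DATABASE_NAME}`;

await mongoose.connect(uri, {
  useNewUrlParser: true,
  useUnifiedTopology: true,
  authSource: "admin", // authenticate against the admin database
  ssl: true,           // Atlas only accepts TLS connections
});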
I have a 3-member replica set running and I am also using the cluster module to fork 3 other processes (the number of replica set members has nothing to do with the number of processes forked). In the mongoose connect method I have the following options set:
"use strict";
const mongoose = require("mongoose");
const config = require("../config.js");
// Set up mongoose connection
mongoose.connect( config.mongoURI, {
useNewUrlParser: true,
// silence deprecation warning
useCreateIndex: true,
// auto reconnect to db
autoReconnect: true,
// turn off buffering, and fail immediately if mongodb disconnects
bufferMaxEntries: 0,
bufferCommands: false,
keepAlive: true,
keepAliveInitialDelay: 450000,
// number of socket connections to keep open
poolSize: 1000
}, error => {
if (error) {
console.log(error);
}
});
module.exports = mongoose.connection;
The above code is in a file named db.js. In my server.js, which starts the express application, I require db.js.
Whenever I reload the webpage multiple times, it gets to a point where the app slows down drastically (all this started happening when I decided to use a replica set). I connected to mongodb through the mongo shell and ran db.serverStatus().connections; every time I reloaded the page the current field increased (which is expected any time a new connection is made to mongodb), but the problem is that whenever the current field reaches the specified poolSize, the application takes a lot of time to load.
I tried calling db.disconnect() whenever the end event is emitted on the req express object, which disconnects from mongodb. This worked as expected, but since I am using change streams, closing opened connections this way throws MongoError: Topology was destroyed. The error being thrown is not the problem; the problem is preventing the app from slowing down drastically once the currently opened connections hit the specified poolSize.
I also tried setting maxIdleTimeMS in the mongodb connection string, and it is not working (maybe mongoose does not support it).
Note: whenever I run db.currentOp(), all operations show active: false.
I actually found out the cause of this issue. Since I am heavily using change streams in the application, the more change streams you create, the higher the poolSize you will need. This issue has also been reported on the CORE SERVER board on the MongoDB Jira platform:
DOCS-11270
NODE-1305
Severe Performance Drop with Mongodb change streams
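A rough sketch of how that plays out in the db.js above (the stream count, the pool headroom, and the Order model are all illustrative assumptions): keep one long-lived change stream per collection and share it, instead of opening a new one per request, and size the pool with the number of open streams in mind.
const mongoose = require("mongoose");
const config = require("../config.js");

// Illustrative sizing only: each open change stream holds a socket from the
// pool, so keep the pool comfortably above the number of streams you keep open.
const OPEN_CHANGE_STREAMS = 10; // assumption for the example
mongoose.connect(config.mongoURI, {
  useNewUrlParser: true,
  poolSize: OPEN_CHANGE_STREAMS + 20,
});

// Create one long-lived stream per collection and reuse it for every request,
// instead of opening a new stream on each page load.
const Order = require("./models/order"); // hypothetical model
const orderStream = Order.watch();
orderStream.on("change", change => {
  // fan the change event out to interested clients (sockets, queues, ...)
});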
I am using sails 1.0.0-37 and sails-mongo 1.0.0-10.
When sails is lifted, if the mongo db server is up and running, everything is okay. If mongo db goes down, and node.js tries to access mongo db as part of some functionality and it times out, an internal server error is shown to the user. This is all okay. However, when mongo comes back up, sails no longer reconnects to it and throws this error:
" AdapterError: Unexpected error from database adapter: fn called its error exit with:{ MongoError: Topology was destroyed } "
I set autoReconnect: true as part of the mongodb adapter's options. This reconnection works only if node.js does not try to access the mongodb server while it is down. How do I fix this? Otherwise it's not possible to use sails 1.0 and sails-mongo in prod.
I faced the same problem, and here is the explanation and solution:
If you don't set "reconnectTries", it defaults to 30. After 30 failed attempts, sails can no longer connect to mongo and throws "Topology was destroyed".
For me, the solution was to set reconnectTries to Number.MAX_VALUE:
default: {
adapter: 'sails-mongo',
url: 'mongodb://admin:admin123@127.0.0.1:27017/datastore?authSource=admin',
reconnectTries: Number.MAX_VALUE,
reconnectInterval: 1000
}
I hope that helps.
I am currently facing an issue when failover happens in a mongodb replica set. The app fails to reconnect to the newly elected primary server and fails to perform all subsequent write operations.
Restarting the app reconnects successfully.
The failover happens instantly and a new primary is elected. However, the app fails to connect to the new primary.
mongodb version: 3.2.6
mongoose version: 4.3.4
node.js version: 0.10.26
I was also facing a similar problem. Then I just changed
mongoose.connect(db)
to
mongoose.connect(db, {useNewUrlParser: true})
and now it is working fine.
I have a primary, a secondary and an arbiter running on three different nodes. This is how I connect using mongoose, and the failover works perfectly fine.
mongoose.connect('mongodb://user:pwd@a.com:27017,b.com:27017,c.com:27017/dbName');
So, everything except mongodb:// is a placeholder.
I had this problem but it turned out to be that I was trying to access from a non-whitelisted IP.
mongoose.connect(url, { useNewUrlParser: true, useUnifiedTopology: true })
Use it like this and it will work fine.