I have configured it as below, and my MongoDB doesn't need a username or password:
mongo: {
  module: 'sails-mongo',
  url: "mongodb://127.0.0.1:27017/mydb",
  replSet: {
    servers: [
      {
        host: "127.0.0.1",
        port: 27018
      },
      {
        host: "127.0.0.1",
        port: 27019
      }
    ],
    options: { connectWithNoPrimary: true, rs_name: "rs0" }
  }
}
It's working fine, meaning I do not get a connection error and I am able to run queries. But when I brought down 127.0.0.1:27017, 127.0.0.1:27018 became PRIMARY, as confirmed by rs.status(). After this, I am no longer able to run any query and keep getting the following:
Error: no open connections
I am sure that I set up the replica set on my local machine correctly, as I used the MongoDB native driver to test the same scenario (bring down the PRIMARY and let a SECONDARY take over as PRIMARY) and there was no problem:
var mongodb = require('mongodb'); // missing in the original snippet
var db;

var url = 'mongodb://127.0.0.1:27017,127.0.0.1:27018,127.0.0.1:27019/mydb?w=0&wtimeoutMS=5000&replicaSet=sg1&readPreference=secondary';
mongodb.MongoClient.connect(url, function(err, result) {
    if (err || result === undefined || result === null) {
        throw err;
    } else {
        db = result;
    }
});
OK, I found the answer. This message is emitted because of session.js. I commented out everything in that file and now it is working. The reason, I guess, is that session.js points to only a single host, the original PRIMARY; when you bring that MongoDB PRIMARY down, session.js can no longer connect, so it throws an exception. I also tried putting all the replica set hosts into the MongoDB URL string in session.js (mongodb://127.0.0.1:27017,127.0.0.1:27018,127.0.0.1:27019/mydb), but then sails lift failed. With only a single host it is fine.
So if I need to store session info, I need to start another MongoDB instance and point session.js to this new instance.
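For that setup, a config/session.js along these lines should work (a minimal sketch assuming the Sails 0.10-era connect-mongo session adapter; the standalone instance on port 27020 is hypothetical):

// config/session.js -- sketch, not from the original post
module.exports.session = {
  secret: 'yourSessionSecret',
  adapter: 'mongo',   // Sails' bundled connect-mongo store
  host: '127.0.0.1',
  port: 27020,        // hypothetical standalone instance just for sessions
  db: 'sessions',
  collection: 'sessions'
};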
Related
I have 2 Node.js applications on a Linux CentOS 7 server. The first runs on the main domain and the second on a subdomain. Both have to connect to the same MongoDB replica set, but to different databases, and they have different usernames and passwords in their connection strings.
The application on the main domain connects without problems, but the subdomain one gets the error: double colon in host identifier.
This is the MongoDB config file for the subdomain:
module.exports = {
  'secret': 'mysecret',
  'database': 'mongodb://myUID:myPass@127.0.0.1:27017,127.0.0.1:27018,127.0.0.1:27019/mySubDomainApp,replset: { rs_name: "rs0" }',
  'hashidsecret': 'theSecret',
  'cryptrsecret': 'thecryptosecret'
};
I found the solution in the config file:
module.exports = {
  'secret': 'jK5skCC5spUWqrs7p',
  'database': 'mongodb://fIujhYes:24KWWisPjfB52@127.0.0.1:27017/challenger?replicaSet=rs0',
  'hashidsecret': 'MCZ4584jHMQsfC',
  'cryptrsecret': 'wYrdS8KV51Rsvd',
  'presetRoles': ['systemadmin', 'admin']
};
Changing ,replset: { rs_name: "rs0" } to ?replicaSet=rs0 did the trick.
It is very strange, though, that the first config file works for the main domain on the same server.
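For reference, applying the same fix to the original multi-host string would presumably look like this (an untested sketch using the question's placeholders):

'database': 'mongodb://myUID:myPass@127.0.0.1:27017,127.0.0.1:27018,127.0.0.1:27019/mySubDomainApp?replicaSet=rs0',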
This is probably not a bug but rather a gap in my understanding; I'm putting it here because I haven't been able to find a way so far. I'd appreciate your input.
I'm trying to connect to my Riak cluster (hosted on AWS) of 3 nodes in two ways: 1) using an ejabberd server, and 2) using a Node server.
Connecting from the ejabberd server succeeds after I put the hostname and port in the ejabberd configuration, but when I use a simple Node server (code below), I get the "Error: No RiakNodes available to execute command." error. Am I missing something here? I can confirm that the 3 nodes are indeed up with Riak running. Note that if I don't do the client ping on the nodes, the server doesn't throw any error, so it probably has to do with how pings are handled. The same server (without the ping) gives an ECONNREFUSED error if one of the nodes is brought down, so clearly the connection is going through, but not the ping.
Apologies if I am missing something basic here ... even the firewall settings for the Riak nodes have been set to allow all inbound traffic, so it is not a case of the ejabberd server having access but not the Node server.
var async = require('async');
var assert = require('assert');
var logger = require('winston');
var Riak = require('basho-riak-client');
logger.remove(logger.transports.Console);
logger.add(logger.transports.Console, {
level : 'debug',
colorize : true,
timestamp : true
});
var nodes = [
'ip-xx-xx-xx-xx:8087',
'ip-xx-xx-xx-xx:8087',
'ip-xx-xx-xx-xx:8087'
];
var client = new Riak.Client(nodes, function (err, c) {
logger.info('Now inside Riak.Client');
// NB: at this point the client is fully initialized, and
// 'client' and 'c' are the same object
});
client.ping(function (err, rslt) {
logger.info('Now entered client.ping');
if (err) {
logger.info('There is an error encountered in client.ping');
throw new Error(err);
} else {
// On success, ping returns true
logger.info('client.ping has resulted in success!');
assert(rslt === true);
}
});
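One thing worth noting: the inline comment above says the client is only fully initialized inside its callback, yet the ping is issued outside it. A minimal sketch that moves the ping inside the callback (an assumption about the cause, not a confirmed fix):

var client = new Riak.Client(nodes, function (err, c) {
    if (err) {
        throw new Error(err);
    }
    // 'c' is the fully initialized client here, so the ping
    // no longer races the cluster connection setup
    c.ping(function (err, rslt) {
        if (err) {
            throw new Error(err);
        }
        assert(rslt === true); // on success, ping returns true
        logger.info('client.ping has resulted in success!');
    });
});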
I've got a node app built on Hapi using MongoDB and mongoose. Locally, I can use the app without issue. I can connect to the db, add data, find it, etc.
I've created an Ubuntu 14.04 x64 droplet on Digital Ocean.
I can ssh into my droplet and verify that my db is there with the correct name. I'm using dokku-alt to deploy and I have linked the db name to the app using dokku's mongodb:link appName mydb
I was having issues once I deployed the app: it would hang and eventually time out. After a lot of debugging and commenting out code, I found that the app hangs any time I hit Mongo like this:
var User = request.server.plugins.db.User;
User
  .findOne({ id: request.auth.credentials.profile.raw.id })
  .exec(function(err, user){
    // do something
  });
Without this, the app loads fine, albeit without data. So my thought is that mongoose is never properly connecting.
I'm using grunt-shell-spawn to run a script which checks whether mongo is already running and, if not, starts it up. I'm not 100% certain that this is needed on the droplet, but I was having issues locally where mongo was already running... script:
/startMongoIfNotRunning.sh
if pgrep mongod; then
  echo running;
else
  mongod --quiet --dbpath db/;
fi
exit 0;
/Gruntfile.js
shell: {
  make_dir: {
    command: 'mkdir -p db'
  },
  mongodb: {
    command: './startMongoIfNotRunning.sh',
    options: {
      stdin: false,
    }
  }
},
And here's how I'm defining the database location:
/index.js
server.register([
{ register: require('./app/db'), options: { url: process.env.MONGODB_URL || 'mongodb://localhost:27017/mydb' } },
....
/app/db/index.js
var mongoose = require('mongoose');
var _ = require('lodash-node');
var models = require('require-all')(__dirname + '/models');

exports.register = function(plugin, options, next) {
  mongoose.connect(options.url, function() {
    next();
  });
  var db = mongoose.connection;
  plugin.expose('connection', db);
  _.forIn(models, function(value, key) {
    plugin.expose(key, value);
  });
};

exports.register.attributes = {
  name: 'db'
};
My app is looking for db files in db/. Could it be that dokku's mongodb:link appName mydb linked it to the wrong location? Perhaps process.env.MONGODB_URL is not being set correctly? I really don't know where to go from here.
It turns out the solution to my problem was adding an entry to the hosts file of my droplet to point to the MongoDB URL:
127.0.0.1 mongodb.myurl.com
For some reason, linking the db to my app with Dokku didn't add this bit; I would have thought it was automatic. The app container's hosts file did get a mongodb entry when I linked the db to the app.
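A separate debugging aside (not part of the fix above): the db plugin swallows connection errors, because the mongoose.connect callback ignores its err argument. Passing it through makes a bad MONGODB_URL fail server.register at startup instead of every query hanging later. A sketch:

exports.register = function(plugin, options, next) {
    mongoose.connect(options.url, function(err) {
        // err is undefined on success, the connection error otherwise
        next(err);
    });
    plugin.expose('connection', mongoose.connection);
};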
I made a simple Node.js app, with Mongoose as the MongoDB driver, and connected it to a MongoDB replica set. The app works fine until I shut down the current PRIMARY. When the PRIMARY is down, the replica set automatically elects a new PRIMARY, but after that the Node application doesn't seem to respond to DB queries.
CODE: DB Connection
var options = {
  server: {
    socketOptions: {
      keepAlive: 1,
      connectTimeoutMS: 30000,
      socketTimeoutMS: 90000
    }
  },
  replset: {
    socketOptions: {
      keepAlive: 1,
      connectTimeoutMS: 30000,
      socketTimeoutMS: 90000
    },
    rs_name: 'rs0'
  }
};
var uri = "mongodb://xxx.xxx.xxx.xxx:27017,xxx.xxx.xxx.xxx:27017,xxx.xxx.xxx.xxx:27017/rstest";
mongoose.connect(uri,options);
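As an observability aid, logging Mongoose's connection events can show whether the driver even notices the failover (a hedged sketch; event names are taken from Mongoose's connection documentation):

mongoose.connection.on('connected', function () { console.log('mongoose connected'); });
mongoose.connection.on('disconnected', function () { console.log('mongoose disconnected'); });
mongoose.connection.on('reconnected', function () { console.log('mongoose reconnected'); });
mongoose.connection.on('error', function (err) { console.log('mongoose error: ' + err); });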
CODE: DB Query
router.get('/test', function(req, res){
  var testmodel = new testModel('test');
  testmodel.save(function (err, doc, numberAffected) {
    if (err) {
      console.log("ERROR: " + err);
      res.status(404).end();
    } else {
      console.log("Response sent");
      res.status(200).end();
    }
  });
});
Steps Followed
Created a MongoDB replica set in three VMs.
Created a simple nodeJS App (Express + Mongoose) with a test API as above
Sent GET requests to 'test' continuously, at some interval, from a local system.
Took the PRIMARY instance down
Console will log "ERROR: Error: connection closed"
APPLICATION STOPPED RESPONDING TO REQUESTS
Versions:
"express": "4.10.6",
"mongodb": "1.4.23",
"mongoose": "3.8.21",
A sample app that I put together for debugging this issue is available at https://melvingeorge@bitbucket.org/melvingeorge/nodejsmongorssample.git
I am not sure if this is a bug or some misconfiguration on my end. How can I solve this issue?
Write operations go only to the primary instance, and it takes some time for the replica set to elect a new primary.
from http://docs.mongodb.org/manual/faq/replica-sets/
How long does replica set failover take?

It varies, but a replica set will select a new primary within a minute.

It may take 10-30 seconds for the members of a replica set to declare a primary inaccessible. This triggers an election. During the election, the cluster is unavailable for writes.

The election itself may take another 10-30 seconds.
Check your code with read operations (find/count): as long as there is no primary instance, you cannot perform write operations.
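Given that window, one option is to retry the write until a new primary is elected. A minimal sketch (not from the original post), reusing the route's testModel:

function saveWithRetry(doc, retriesLeft, done) {
    doc.save(function (err) {
        if (err && retriesLeft > 0) {
            // likely no primary yet; wait and try again
            return setTimeout(function () {
                saveWithRetry(doc, retriesLeft - 1, done);
            }, 5000);
        }
        done(err);
    });
}

// usage in the route above: up to 6 attempts spread over ~30 seconds,
// enough to cover the election window described in the FAQ
saveWithRetry(new testModel('test'), 5, function (err) {
    res.status(err ? 404 : 200).end();
});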
The 'rs_name' in the replset options is necessary to specify a replica set. You can also use mongoose.createConnection(uri, conf, callback) and inspect the final conf in the callback.
It looks like this got fixed in NODE-818 / 2.2.10.
But I am using 2.2.22 and still have a problem like that: upon reconnect, the Mongo client reconnects to a secondary instead of the newly elected primary, which means I cannot write to the database.
My connection string is like mongodb://mongo1,mongo2,mongo3/db?replicaSet=rs
Hello, I'm trying to get a Node/Mongo service going on OpenShift. Here's what it looks like:
var db = new mongodb.Db('myServiceName',
    new mongodb.Server('mongodb://$OPENSHIFT_MONGODB_DB_HOST', '$OPENSHIFT_MONGODB_DB_PORT', {}));

db.open(function (err, db_p) {
    if (err) { throw err; }
    db.authenticate('$USER', '$PASS', function (err, replies) {
        if (err) { throw err; }
        // should be connected and authenticated.
        // ...
The app was created using rhc:
$ rhc create-app myServiceName nodejs-0.10 mongodb-2.4
The console shows the app was started and is running, but a cURL request gets a 503 response.
My logs don't show an error; however, the DB is obviously not live. Can anyone help?
If your mongodb driver supports connection with username/password, then use OPENSHIFT_MONGODB_DB_URL instead of OPENSHIFT_MONGODB_DB_HOST
OPENSHIFT_MONGODB_DB_URL gives you this format:
mongodb://admin:password@127.4.99.1:27017/
and OPENSHIFT_MONGODB_DB_HOST gives you this format:
an IP address, e.g. 127.4.99.1
So you can just use OPENSHIFT_MONGODB_DB_URL to connect and authenticate at the same time
with mongoskin, you can just do this:
var db = require('mongoskin').db(process.env.OPENSHIFT_MONGODB_DB_URL + 'dbname'+ '?auto_reconnect=true',
{safe: true, strict: false}
);
It looks like you are attempting to connect to a server literally named "$OPENSHIFT_MONGODB_DB_HOST", which is not a valid URL.
Instead, you'll probably want to read the value of the OPENSHIFT_MONGODB_DB_HOST environment variable to find your connection information:
process.env.OPENSHIFT_MONGODB_DB_HOST
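Applied to the snippet from the question, that would look roughly like this (a sketch; note that mongodb.Server expects a bare hostname rather than a mongodb:// URL, and the port arrives as a string in the environment):

var db = new mongodb.Db('myServiceName',
    new mongodb.Server(process.env.OPENSHIFT_MONGODB_DB_HOST,
        parseInt(process.env.OPENSHIFT_MONGODB_DB_PORT, 10), {}));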
I have some additional notes up here: https://www.openshift.com/blogs/getting-started-with-mongodb-on-nodejs-on-openshift