I have three machines running on Amazon cloud. I set the replica set config on the first (primary) machine as follows:
{
"_id" : "rs0",
"version" : 270805,
"members" : [
{
"_id" : 0,
"host" : "xxx.xxx.xxx.xxx:27017",
"priority" : 2
},
{
"_id" : 1,
"host" : "xxx.xxx.xxx.xxx:27017"
},
{
"_id" : 2,
"host" : "xxx.xxx.xxx.xxx:27017"
}
]
}
The second machine then auto-syncs the config; however, the third machine does not sync the config, and all members become [secondary]. When I try to set the replica set config on the third machine using rs.reconfig(conf, {force: true}), it does not apply and throws an error along the lines of:
... has a config version >= to the new cfg version; cannot change config
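For reference, the failing step on the third machine was presumably something like the following in the mongo shell (the hosts are the same placeholders as above, and the exact conf document is an assumption):
conf = {
    "_id" : "rs0",
    "members" : [
        { "_id" : 0, "host" : "xxx.xxx.xxx.xxx:27017", "priority" : 2 },
        { "_id" : 1, "host" : "xxx.xxx.xxx.xxx:27017" },
        { "_id" : 2, "host" : "xxx.xxx.xxx.xxx:27017" }
    ]
}
// this is the call that is rejected with "config version >= to the new cfg version"
rs.reconfig(conf, { force: true })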
I am new to MongoDB and I am facing an issue. I have hundreds of millions of documents in my collection, and I am trying to find a single entry using the findOne({}) command. When I look up recent entries, the response comes back in milliseconds, but when I try to fetch older entries (around the 600 millionth document) it takes around 2 minutes in the mongo shell, and my Node server gives
{ MongoError: connection 1 to 127.0.0.1:27017 timed out }
and my Node.js server sends an empty response. Can anyone tell me what I should do to resolve this issue? Thanks in advance.
explain gives me
db.contacts.find({"phoneNumber":"9165900137"}).explain("executionStats")
{
"queryPlanner" : {
"plannerVersion" : 1,
"namespace" : "meanApp.contacts",
"indexFilterSet" : false,
"parsedQuery" : {
"phoneNumber" : {
"$eq" : "9165900137"
}
},
"winningPlan" : {
"stage" : "COLLSCAN",
"filter" : {
"phoneNumber" : {
"$eq" : "9165900137"
}
},
"direction" : "forward"
},
"rejectedPlans" : [ ]
},
"executionStats" : {
"executionSuccess" : true,
"nReturned" : 1,
"executionTimeMillis" : 321188,
"totalKeysExamined" : 0,
"totalDocsExamined" : 495587806,
"executionStages" : {
"stage" : "COLLSCAN",
"filter" : {
"phoneNumber" : {
"$eq" : "9165900137"
}
},
"nReturned" : 1,
"executionTimeMillisEstimate" : 295230,
"works" : 495587808,
"advanced" : 1,
"needTime" : 495587806,
"needYield" : 0,
"saveState" : 3871779,
"restoreState" : 3871779,
"isEOF" : 1,
"invalidates" : 0,
"direction" : "forward",
"docsExamined" : 495587806
}
},
"serverInfo" : {
"host" : "li1025-15.members.linode.com",
"port" : 27017,
"version" : "3.2.16",
"gitVersion" : "056bf45128114e44c5358c7a8776fb582363e094"
},
"ok" : 1
}
As indicated in the explain plan results, the current query is doing a collection scan (COLLSCAN). This means it has to scan every document in the collection to produce the match, and you have about half a billion documents.
Try adding the index below; it may take a while to build.
db.contacts.createIndex( { phoneNumber: 1 }, { background: true } )
Run the query once the index build has completed and you should see a dramatic improvement in performance. To be certain the index got picked up, run explain again; it should no longer say COLLSCAN.
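A rough sketch of what to look for when re-running explain after the index is in place:
db.contacts.find({"phoneNumber":"9165900137"}).explain("executionStats")
// the winningPlan should now show an IXSCAN stage on { phoneNumber: 1 }
// (typically a FETCH stage with an IXSCAN input) instead of COLLSCAN,
// and totalDocsExamined should drop from ~495 million to about 1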
Hello all, I am stuck on something. I am working with MongoDB and Node.js, and my collection data gets deleted automatically after 1 year, on a certain date. I want to stop that permanently; how can I do that? I have checked the material available on Google but didn't have much success. Please help me, friends...
I have checked the indexes on one of my collections and they look like this. Can you please tell me whether it has a TTL index or not?
[
{
"v" : 1,
"key" : {
"_id" : 1
},
"name" : "_id_",
"ns" : "firstfive.teachers"
},
{
"v" : 1,
"key" : {
"_fts" : "text",
"_ftsx" : 1
},
"name" : "firstname_lastname_text",
"weights" : {
"firstName" : 1,
"lastName" : 1
},
"default_language" : "english",
"language_override" : "language",
"ns" : "firstfive.teachers",
"textIndexVersion" : 2
}
]
Most likely you have a TTL (time to live) index defined on the collection you're working with (https://docs.mongodb.com/v3.2/core/index-ttl/).
You can check it by running db.your_collection.getIndexes() in the mongo shell; the TTL index will be the one with an expireAfterSeconds field.
Like any other index it can be removed, but do it carefully; apparently someone created it deliberately.
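For illustration, this is roughly what checking for and removing a TTL index looks like in the mongo shell (the index name, field, and expiry value below are hypothetical examples):
db.teachers.getIndexes()
// a TTL index entry would contain an expireAfterSeconds field, e.g.:
// { "v" : 1, "key" : { "createdAt" : 1 }, "name" : "createdAt_1",
//   "ns" : "firstfive.teachers", "expireAfterSeconds" : 31536000 }

// dropping it by name stops the automatic expiry
db.teachers.dropIndex("createdAt_1")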
So recently we redesigned our MongoDB database cluster to use SSL and replica sets in addition to the sharding we had already implemented. SSL wasn't too difficult to get working; we just needed to split up the private key and certificate, and then everything worked fine. However, getting my Node.js app to connect to both mongos instances is proving to be more difficult than I anticipated.
Before we implemented replica sets, we just had two shards, each of them running a mongos router, and in mongoose I gave it the following connection string:
mongodb://Host1:27017,Host2:27017/DatabaseName
Then, in the options object to the connection, I passed in the following:
{mongos: true}
This seemed to work just fine. However, after the replica sets were implemented, whenever I pass the mongos option the application never connects. Our cluster is now set up so that there are 4 MongoDB servers in 2 replica sets of 2 servers each. The master in each replica set is also running a mongos router instance. I assumed I should be able to connect the same way as before, but it never connects. If I create the connection using just 1 shard with no options, the application connects just fine. However, this is not ideal, as the whole point is to have redundancy among the router instances. Can anyone offer some insight here?
Here is the output of sh.status():
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("57571fc5bfe098f05bbbe370")
}
shards:
{ "_id" : "rs0", "host" : "rs0/mongodb-2:27018,mongodb-3:27018" }
{ "_id" : "rs1", "host" : "rs1/mongodb-4:27018,mongodb-5:27018" }
active mongoses:
"3.2.7" : 4
balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
No recent migrations
databases:
{ "_id" : "Demo", "primary" : "rs0", "partitioned" : true }
I was asked to output rs.config(); here it is from the first master node:
{
"_id" : "rs0",
"version" : 1,
"protocolVersion" : NumberLong(1),
"members" : [
{
"_id" : 0,
"host" : "mongodb-2:27018",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {
},
"slaveDelay" : NumberLong(0),
"votes" : 1
},
{
"_id" : 1,
"host" : "mongodb-3:27018",
"arbiterOnly" : false,
"buildIndexes" : true,
"hidden" : false,
"priority" : 1,
"tags" : {
},
"slaveDelay" : NumberLong(0),
"votes" : 1
}
],
"settings" : {
"chainingAllowed" : true,
"heartbeatIntervalMillis" : 2000,
"heartbeatTimeoutSecs" : 10,
"electionTimeoutMillis" : 10000,
"getLastErrorModes" : {
},
"getLastErrorDefaults" : {
"w" : 1,
"wtimeout" : 0
},
"replicaSetId" : ObjectId("57571692c490a699f61e3784")
}
}
Alright, so I finally figured it out. I went through the logs on the server and saw that the client was trying to connect and wasn't using SSL so kept getting booted by the server. This was confusing to me because I set SSL in the server options and had the correct keys and cert bundle, as I was able to connect to a single instance just fine. Then I looked through the mongo driver options here. It shows that there are options you need to set for mongos itself regarding SSL. After setting these explicitly I was able to connect.
In summary, this options object allowed me to connect:
var options = {
"server": {
"ssl": true,
"sslCA": sslCAbuffer,
"sslCert": sslCertbuffer,
"sslKey": sslKeybuffer
},
"mongos": {
"ssl": true,
"sslCA": sslCAbuffer,
"sslCert": sslCertbuffer,
"sslKey": sslKeybuffer
}
}
while this options object did not:
var options = {
"server": {
"ssl": true,
"sslCA": sslCAbuffer,
"sslCert": sslCertbuffer,
"sslKey": sslKeybuffer
},
"mongos": true
}
I think the server object is probably redundant, but I left it in.
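For context, here is a rough sketch of how such an options object might be wired into a mongoose connection (the file paths and buffer loading are placeholder assumptions, not from the original setup):
var fs = require('fs');
var mongoose = require('mongoose');

// certificate material loaded as Buffers (paths are hypothetical)
var sslCAbuffer = fs.readFileSync('/path/to/ca.pem');
var sslCertbuffer = fs.readFileSync('/path/to/client.crt');
var sslKeybuffer = fs.readFileSync('/path/to/client.key');

var options = {
    server: { ssl: true, sslCA: sslCAbuffer, sslCert: sslCertbuffer, sslKey: sslKeybuffer },
    mongos: { ssl: true, sslCA: sslCAbuffer, sslCert: sslCertbuffer, sslKey: sslKeybuffer }
};

// both mongos routers in the seed list, as in the original connection string
mongoose.connect('mongodb://Host1:27017,Host2:27017/DatabaseName', options);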
I am trying to connect to MongoDB using mongodb.MongoClient.connect() with a simple URL connection string for a replica set. When I start the server it throws the error:
Error: Could not locate any valid servers in initial seed list
This is my code, where I am passing three MongoDB servers as follows:
var MongoClient = mongodb.MongoClient;
MongoClient.connect('mongodb://192.168.0.16,192.168.0.23,192.168.0.17/test', function(err, db) {
if(err){
console.error("Error! Exiting... Must start MongoDB first");
console.log("The error is :::::::::::::::", err);
process.exit(1);
}else{
console.log("Connection successful");
}
});
I have also set up the replica set.
I have three servers: one acts as the primary and the others act as secondaries. Using rs.status(), I can see that all servers are working fine, but I am still receiving the same error.
mongodb version = 2.2.3
mongodb lib version = 1.3.18
Here is the rs.status() output:
{
"set" : "rs01",
"date" : ISODate("2015-01-09T07:35:15Z"),
"myState" : 1,
"members" : [
{
"_id" : 0,
"name" : "192.168.0.23:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 2079,
"optime" : Timestamp(1420787077000, 1),
"optimeDate" : ISODate("2015-01-09T07:04:37Z"),
"lastHeartbeat" : ISODate("2015-01-09T07:35:13Z"),
"pingMs" : 0
},
{
"_id" : 1,
"name" : "192.168.0.16:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 2088,
"optime" : Timestamp(1420787077000, 1),
"optimeDate" : ISODate("2015-01-09T07:04:37Z"),
"self" : true
},
{
"_id" : 2,
"name" : "192.168.0.17:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 1838,
"optime" : Timestamp(1420787077000, 1),
"optimeDate" : ISODate("2015-01-09T07:04:37Z"),
"lastHeartbeat" : ISODate("2015-01-09T07:35:14Z"),
"pingMs" : 0
}
],
"ok" : 1
}
But I don't know what the issue could be. This issue is occurring in my production setup as well.
Well, this looks straightforward. Are you sure you are running the mongod servers? If so, are they running on the default port 27017 (since you did not specify a port number, that is what will be used)?
I would simplify your connection string further and just use one server URL, for the sake of debugging. I would also explicitly specify the port number to spell it all out.
Is one of these servers a primary? Can you connect to it from the mongo shell? That would be the first test.
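For example, a stripped-down debugging connection against the current primary (host and port taken from the rs.status() output above) might look something like this:
var MongoClient = require('mongodb').MongoClient;

// single server, explicit port, no replica set options -- just to verify basic connectivity
MongoClient.connect('mongodb://192.168.0.16:27017/test', function(err, db) {
    if (err) {
        console.log("Still cannot connect:", err);
        return;
    }
    console.log("Connected to the primary directly");
    db.close();
});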
MongoDB supports bulk insert http://docs.mongodb.org/manual/core/bulk-inserts/
I have tried it in Meteor collection:
Orders.insert([
{ "cust_id" : "A123", "amount" : 500, "status" : "A", "_id" : "iZXL7ewBDALpic8Fj" },
{ "cust_id" : "A123", "amount" : 250, "status" : "A", "_id" : "zNrdBAxxeNZQ2yrL9" },
{ "cust_id" : "B212", "amount" : 200, "status" : "A", "_id" : "vev3pjJ8cmiDHHxe4" },
{ "cust_id" : "A123", "amount" : 300, "status" : "D", "_id" : "BBLngRhS76DgeHJQJ" }
]);
but it creates just
{ "0" : { "cust_id" : "A123", "amount" : 500, "status" : "A", "_id" : "iZXL7ewBDALpic8Fj"},
"1" : { "cust_id" : "A123", "amount" : 250, "status" : "A", "_id" : "zNrdBAxxeNZQ2yrL9" },
"2" : { "cust_id" : "B212", "amount" : 200, "status" : "A", "_id" : "vev3pjJ8cmiDHHxe4" },
"3" : { "cust_id" : "A123", "amount" : 300, "status" : "D", "_id" : "BBLngRhS76DgeHJQJ" },
"_id" : "6zWayeGtQCdfS65Tz" }
I need it for performance testing purposes. I need to fill the database with thousands of test items. I do the inserts in a forEach loop, but it takes too long to fill the database.
Is there any workaround? Or can we expect Meteor to support this in a future version?
You could use exec (see the Node.js docs) to run a mongo script from Meteor, inside a Meteor.startup block on the server.
Example:
Meteor.startup(function () {
var exec = Npm.require('child_process').exec;
exec('mongo localhost:27017/meteor path-to/my-insert-script.js', function ( ) {
// done
});
});
Not optimal, but I think it's your best bet for now. You can also use the --eval command-line option with mongo in exec and pass the insert statement as a string. That might look like this:
Meteor.startup(function () {
var exec = Npm.require('child_process').exec;
exec('mongo localhost:27017/meteor --eval \'db.Orders.insert(' + JSON.stringify(arrOfOrders) + ')\'', function ( ) {
// done
});
});
When inserting a lot of data into the DB, e.g. in a forEach loop, you want to make sure that there is no reactive content on your page that depends on it. Otherwise the reactive rerendering is going to slow your client down tremendously. You can easily insert several thousand documents into a collection in a fraction of a second when all templates are disabled, while the same operation can take several minutes with your CPU at 100% on both the client and the server if there is relevant reactivity happening.
You may want to add a condition to any template whose content depends on this data, such as:
Template.myTemplate.items = function() {
if (Session.get("active")) {
return Orders.find();
}
}
Then you can deactivate all reactive rerendering before your forEach loop with Session.set("active", false) and reactivate it again afterwards with Session.set("active", true), as sketched below.
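A minimal sketch of that pattern (the testOrders array is a hypothetical stand-in for your generated test data):
// hide the dependent template before the bulk insert
Session.set("active", false);

testOrders.forEach(function (order) {
    Orders.insert(order);
});

// re-enable reactive rendering once the inserts are done
Session.set("active", true);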
You could use rawCollection(), which exposes the underlying Node MongoDB driver collection of a Meteor.Collection.
await Orders.rawCollection().insertMany(arrOfOrders)
It worked on 70M documents in my case.
(await makes the call behave synchronously, so consider whether or not that suits your purpose.)