I followed the replication (master-slave) setup guide and got data replicated fine, but the applier has stopped with the error below. I searched the manual and Googled without finding much beyond the error message itself. I'd really appreciate any clues on how to troubleshoot this. Thanks!
arangosh [_system]> require("org/arangodb/replication").applier.state();
{
  "state" : {
    "running" : false,
    "lastAppliedContinuousTick" : null,
    "lastProcessedContinuousTick" : null,
    "lastAvailableContinuousTick" : null,
    "progress" : {
      "time" : "2014-04-15T20:20:13Z",
      "message" : "applier stopped",
      "failedConnects" : 0
    },
    "totalRequests" : 5,
    "totalFailedConnects" : 0,
    "totalEvents" : 0,
    "lastError" : {
      "time" : "2014-04-15T20:20:13Z",
      "errorMessage" : "no start tick",
      "errorNum" : 1413
    },
    "time" : "2014-04-15T20:37:50Z"
  },
  "server" : {
    "version" : "2.0.4",
    "serverId" : "83323931320193"
  },
  "endpoint" : "tcp://master:8529",
  "database" : "_system"
}
This might happen if you start the applier for the very first time without specifying a start tick. Use
require("org/arangodb/replication").applier.start(1)
instead.
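In practice, rather than starting from tick 1, you can run a full initial sync on the slave and resume from the tick it reports. A minimal sketch for ArangoDB 2.x (the endpoint and credentials are placeholders for your setup):

var replication = require("org/arangodb/replication");

// full initial sync from the master; returns the log tick to resume from
var result = replication.sync({
  endpoint: "tcp://master:8529",  // placeholder endpoint
  username: "root",               // placeholder credentials
  password: ""
});

// start continuous replication from where the sync left off
replication.applier.start(result.lastLogTick);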
Maybe I'm missing something, but according to the documentation and all the posts online, setting
cursor.maxTimeMS(1000).toArray(...)
should time out after 1000ms, and MongoDB should kill the operation after the timeout.
But none of this is happening.
First, there is no timeout. It keeps going.
Second, when I check db.currentOp(), the operation is still there, eating up memory. These operations accumulate and eventually crash the database with an OOM.
Running db.currentOp() after several minutes of no response prints:
{
  "inprog" : [
    {
      "host" : "db2:27017",
      "desc" : "conn20",
      "connectionId" : 20,
      "client" : "127.0.0.1:59214",
      "clientMetadata" : {
        "driver" : {
          "name" : "nodejs",
          "version" : "3.1.4"
        },
        "os" : {
          "type" : "Linux",
          "name" : "linux",
          "architecture" : "x64",
          "version" : "4.15.0-30-generic"
        },
        "platform" : "Node.js v8.10.0, LE, mongodb-core: 3.1.3"
      },
      "active" : true,
      "currentOpTime" : "2018-09-14T00:10:29.903+0000",
      "opid" : 11056,
      "lsid" : {
        "id" : UUID("78a2d853-30bf-4d6d-a208-0a150d9bf8be"),
        "uid" : BinData(0,"47DEQpj8HBSa+/TImW+5JCeuQeRkm5NMpJWZG3hSuFU=")
      },
      "secs_running" : NumberLong(649),
      "microsecs_running" : NumberLong(649968360),
      "op" : "command",
As you can see, this has been running for 649 seconds, even though I explicitly specified 1000ms.
What is going on here? I've been pulling my hair out for two days and can't figure this out.
I had the same issue and had to update the mongodb driver from 3.1.1 to 3.3.5, and it worked like a charm!
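For reference, here is roughly how the timeout is supposed to behave once the driver handles it correctly — a minimal sketch, assuming a local server and placeholder database/collection names:

const { MongoClient } = require("mongodb");

async function main() {
  // placeholders: adjust the connection string and names for your setup
  const client = await MongoClient.connect("mongodb://localhost:27017", { useNewUrlParser: true });
  const coll = client.db("test").collection("docs");
  try {
    // maxTimeMS asks the server to abort the operation after 1000 ms;
    // a working driver surfaces that as a rejected promise (MongoError)
    const docs = await coll.find({ someField: "someValue" }).maxTimeMS(1000).toArray();
    console.log(docs.length);
  } catch (err) {
    console.error("query aborted:", err.message);
  } finally {
    await client.close();
  }
}

main();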
I am new to MongoDB and I am facing an issue. I have hundreds of millions of documents in my collection, and I am trying to find a single entry using the findOne({}) command. When I fetch recent entries, the response comes in milliseconds, but when I try to fetch older entries (around the 600 millionth document) it takes around 2 minutes in the mongo shell, and my Node server gives
{ MongoError: connection 1 to 127.0.0.1:27017 timed out }
and then sends an empty response. Can anyone tell me what I should do to resolve this issue? Thanks in advance.
explain("executionStats") gives me:
db.contacts.find({"phoneNumber":"9165900137"}).explain("executionStats")
{
  "queryPlanner" : {
    "plannerVersion" : 1,
    "namespace" : "meanApp.contacts",
    "indexFilterSet" : false,
    "parsedQuery" : {
      "phoneNumber" : {
        "$eq" : "9165900137"
      }
    },
    "winningPlan" : {
      "stage" : "COLLSCAN",
      "filter" : {
        "phoneNumber" : {
          "$eq" : "9165900137"
        }
      },
      "direction" : "forward"
    },
    "rejectedPlans" : [ ]
  },
  "executionStats" : {
    "executionSuccess" : true,
    "nReturned" : 1,
    "executionTimeMillis" : 321188,
    "totalKeysExamined" : 0,
    "totalDocsExamined" : 495587806,
    "executionStages" : {
      "stage" : "COLLSCAN",
      "filter" : {
        "phoneNumber" : {
          "$eq" : "9165900137"
        }
      },
      "nReturned" : 1,
      "executionTimeMillisEstimate" : 295230,
      "works" : 495587808,
      "advanced" : 1,
      "needTime" : 495587806,
      "needYield" : 0,
      "saveState" : 3871779,
      "restoreState" : 3871779,
      "isEOF" : 1,
      "invalidates" : 0,
      "direction" : "forward",
      "docsExamined" : 495587806
    }
  },
  "serverInfo" : {
    "host" : "li1025-15.members.linode.com",
    "port" : 27017,
    "version" : "3.2.16",
    "gitVersion" : "056bf45128114e44c5358c7a8776fb582363e094"
  },
  "ok" : 1
}
As indicated in the explain plan results, the current query is doing a collection scan (COLLSCAN). This means it has to scan every document in the collection to produce the match, and you have about half a billion documents.
Try adding the index below; it may take a while to build on a collection this size.
db.contacts.createIndex( { phoneNumber: 1 }, { background: true } )
Run the query once the index build has finished and you should see a dramatic improvement in performance. To be certain the index got picked up, run explain again; it should no longer say COLLSCAN.
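For instance, you can confirm it like this (the expected plan shape is an assumption based on how MongoDB 3.2 reports index scans):

db.contacts.find({"phoneNumber":"9165900137"}).explain("executionStats")
// the winningPlan should now be a FETCH over an IXSCAN on { phoneNumber: 1 },
// and totalDocsExamined should drop from ~495 million to roughly nReturned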
Hello all, I am stuck on something. I am working with MongoDB and Node.js, and my collection's data gets deleted automatically after one year, on a certain date. I want to stop that permanently; how can I do that? I have checked the material available on Google but didn't get much success. Please help me, friends.
I have checked the indexes on one of my collections and they are shown below. Can you please tell me whether it has a TTL index or not?
[
  {
    "v" : 1,
    "key" : {
      "_id" : 1
    },
    "name" : "_id_",
    "ns" : "firstfive.teachers"
  },
  {
    "v" : 1,
    "key" : {
      "_fts" : "text",
      "_ftsx" : 1
    },
    "name" : "firstname_lastname_text",
    "weights" : {
      "firstName" : 1,
      "lastName" : 1
    },
    "default_language" : "english",
    "language_override" : "language",
    "ns" : "firstfive.teachers",
    "textIndexVersion" : 2
  }
]
Most likely you have a TTL (time to live) index defined on a collection you're working with (https://docs.mongodb.com/v3.2/core/index-ttl/).
You can check by running db.your_collection.getIndexes() in the mongo shell; a TTL index is the one with an expireAfterSeconds field. Note that neither index in the list you posted has expireAfterSeconds, so this particular collection has no TTL index; check your other collections.
Like any other index it can be removed, but do it carefully; apparently someone created it deliberately.
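A minimal sketch of the check-and-remove flow in the shell (collection and index names are placeholders):

db.your_collection.getIndexes()
// a TTL index is the entry with "expireAfterSeconds"; use its "name" to drop it:
db.your_collection.dropIndex("createdAt_1")  // placeholder index name
// with the TTL index gone, documents will no longer be deleted automatically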
I've written a back-end Node server for a multiplayer game I'm developing, and most of the time each request takes about 20-100ms to resolve. However, sometimes (maybe 1 out of 50 requests) the same request will take 2000+ms to resolve.
The server is written entirely in Node.js and is hosted on Heroku. I am using Mongoose to make the calls to the database.
Here is a screenshot of the logs. At the top you can see how queries normally behave: the request comes in at 19:03:03.68, the response is sent out at 19:03:03.73, and saving all the data finishes at 19:03:03.74. Heroku logs the request as taking 58ms, which is the desired and expected outcome.
Below that is when the issue occurs. You can see multiple requests come in from two separate clients (each client sends ~1 request per second, which is correct), but the requests build up and after about 2000-5000ms they all quickly resolve one after another. I've tried narrowing down the issue without much luck, but I believe it's related to my database queries: multiple requests come in, but the first query to the database doesn't actually resolve until around 2300ms later. As far as I can tell these requests are identical to the ones that resolve in 20-100ms, and they occur completely at random.
The actual code on the server is similar to this (simplified for the sake of this question):
console.log("request received");
Game.findOne({ id: gameID }, function (err, theGame) {
  console.log("First Query");
  // ...game logic runs and the response is sent from inside this callback
});
I also opened up the mongo shell for the database to look for queries taking an excessive amount of time (>2000ms) with this code:
db.system.profile.find( {millis: {$gt : 2000} } ).sort( { ts: 1} );
Here are the slightly modified results which should include everything relevant:
{ "op" : "update", "ns" : "theDb.players", "query" :
{ "_id" : ObjectId("572b8eb242d70903005df0df")
}, "updateobj" :
{ "$set" :
{ "lastSeen" : ISODate("2016-05-05T18:19:30.761Z"), "timeElapsed" : 16
}
}, "nscanned" : 1, "nscannedObjects" : 1, "nMatched" : 1, "nModified" : 1, "fastmod" : true, "keyUpdates" : 0, "writeConflicts" : 0, "numYield" : 0, "locks" :
{ "Global" :
{ "acquireCount":
{ "r" : NumberLong(2), "w" : NumberLong(2) }
}, "MMAPV1Journal" :
{ "acquireCount" :
{ "w" : NumberLong(2) }, "acquireWaitCount" :
{ "w" : NumberLong(1) }, "timeAcquiringMicros" :
{ "w" : NumberLong(7294179) }
}, "Database" :
{ "acquireCount" :
{ "w" : NumberLong(2) }
}, "Collection" :
{ "acquireCount" :
{ "W" : NumberLong(1) }
}, "oplog" :
{ "acquireCount" :
{ "w" : NumberLong(1) }
}
}, "milli" : 2298, "execStats" : {}, "ts" : ISODate("2016-05-05T18:19:33.060Z")
Second Result:
{ "op" : "update", "ns" : "theDb.connections", "query" :
{ "_id" : ObjectId("572b8eaf42d70903005df0dd")
}, "updateobj" :
{ "$set" :
{ "internalCounter" : 3, "lastCount" : 3, "lastSeen" : ISODate("2016-05-05T18:19:30.761Z"), "playerID" : 128863276517, "sinceLast" : 0
}
}, "nscanned" : 1, "nscannedObjects" : 1, "nMatched" : 1, "nModified" : 1, "keyUpdates" : 0, "writeConflicts" : 0, "numYield" : 0, "locks" :
{ "Global" :
{ "acquireCount" :
{ "r" : NumberLong(2), "w" : NumberLong(2)
}
}, "MMAPV1Journal" :
{ "acquireCount" :
{ "w" : NumberLong(2) }, "acquireWaitCount" :
{ "w" : NumberLong(1) }, "timeAcquiringMicros" :
{ "w" :NumberLong(7294149) }
}, "Database" :
{ "acquireCount" :
{ "w" : NumberLong(2) }
}, "Collection" :
{ "acquireCount" :
{ "W" : NumberLong(1) }
}, "oplog" :
{ "acquireCount" :
{ "w" : NumberLong(1) }
}
}, "millis" : 2299, "execStats" : {},"ts" : ISODate("2016-05-05T18:19:33.061Z")
I really need to ensure that the latency for any request never exceeds 500ms; otherwise it is extremely irritating in the game itself. I'm really at a loss for what might be causing this and how to investigate further.
I'm assuming the cause of the issue is that timeAcquiringMicros is so long: NumberLong(7294179) is roughly 7.3 seconds spent waiting for the MMAPV1 journal lock. I'm unsure what is causing this, though.
*Note: the client requests the data with just standard HTTP requests; I'm not currently using any sockets.
Alright, I've finally solved the issue. The problem wasn't actually anything I had done. I was using the sandbox plan that mLab offers through Heroku, which had my application competing for processing time with other users on the same sandbox plan. Their queries were slowing down the database, causing those spikes in response times.
The solution: I had to upgrade to their shared cluster plan. Since upgrading, I haven't had any irregularities in query times.
My site uses MongoDB for its chat application. MongoDB queries are timing out, so I checked db.currentOp(). Below are the currentOp() output and the MongoDB details:
637 active operations
750 inactive operations
Other details about MongoDB:
MongoDB is running with sharding.
I have two databases:
a) the first database has only two collections
b) the second database has 5 collections
My questions are: why did the currentOp() count increase suddenly, and what do we have to take care of when the currentOp() count increases? Please help me with this, and apologies for my bad English.
Below is a sample of my currentOp() output:
MongoDB shell version: 1.8.2
> db.currentOp()
{
  "inprog" : [
    {
      "opid" : "msdata1:234234234",
      "active" : true,
      "lockType" : "read",
      "waitingForLock" : false,
      "secs_running" : 43534,
      "op" : "getmore",
      "ns" : "local.oplog.rs",
      "query" : {
      },
      "client_s" : "70.52.078.123:12345",
      "desc" : "conn"
    },
    {
      "opid" : "msdata1:2342323423",
      "active" : true,
      "lockType" : "read",
      "waitingForLock" : false,
      "secs_running" : 231231,
      "op" : "query",
      "ns" : "ichat.chatmemberlist",
      "query" : {
        "count" : "chatmemberlist",
        "query" : {
          "Mid" : "23423",
          "bmid" : "23423"
        }
      },
      "client_s" : "70.52.078.123:12345",
      "desc" : "conn"
    },
    {
      "opid" : "msdata1:2342323423",
      "active" : false,
      "lockType" : "write",
      "waitingForLock" : true,
      "op" : "update",
      "ns" : "?ichat.useravail",
      "query" : {
        "Mid" : "23423"
      },
      "client_s" : "70.512.078234.423:12345",
      "desc" : "conn"
    },
...
...
...
From the limited amount of info, I can see that your queries are just running a really long time: "secs_running" : 231231 means 231,231 seconds, i.e. more than two and a half days. It's likely that you don't have enough resources available for the type of queries that you are running: that could mean not enough memory, or perhaps too many queries acquiring locks. If you're not on MongoDB 2.0.x yet, you might want to upgrade to it, as it has vastly improved locking: http://blog.pythonisito.com/2011/12/mongodbs-write-lock.html
I would advise checking the mongodb.log file to see which queries are slow, then using explain to figure out whether you have indexes on the relevant fields, and then either adding indexes or seeing whether a schema re-design might be a better solution.
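A sketch of that workflow in the shell, using the count query from the currentOp() paste above (the profiler threshold and the compound index are assumptions to adapt to your workload):

// log operations slower than 100 ms, then inspect the slowest recent entries
db.setProfilingLevel(1, 100)
db.system.profile.find().sort({ ts: -1 }).limit(5)

// check whether the chatmemberlist query can use an index
db.chatmemberlist.find({ "Mid" : "23423", "bmid" : "23423" }).explain()

// if it scans the whole collection, add a compound index
// (ensureIndex is the old shell helper appropriate for this server version)
db.chatmemberlist.ensureIndex({ Mid: 1, bmid: 1 })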